Test Report: QEMU_macOS 19616

ead8b21730629246ae204938704f78710656bdeb:2024-09-12:36186

Failed tests (98/274)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 29.02
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 9.93
33 TestAddons/parallel/Registry 71.33
46 TestCertOptions 10.17
47 TestCertExpiration 195.36
48 TestDockerFlags 10.34
49 TestForceSystemdFlag 10.4
50 TestForceSystemdEnv 10.52
95 TestFunctional/parallel/ServiceCmdConnect 29.19
167 TestMultiControlPlane/serial/StopSecondaryNode 214.13
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 102.8
169 TestMultiControlPlane/serial/RestartSecondaryNode 208.97
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.42
172 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.02
174 TestMultiControlPlane/serial/StopCluster 202.09
175 TestMultiControlPlane/serial/RestartCluster 5.25
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
177 TestMultiControlPlane/serial/AddSecondaryNode 0.07
181 TestImageBuild/serial/Setup 10.01
184 TestJSONOutput/start/Command 9.83
190 TestJSONOutput/pause/Command 0.08
196 TestJSONOutput/unpause/Command 0.04
213 TestMinikubeProfile 10.42
216 TestMountStart/serial/StartWithMountFirst 10.11
219 TestMultiNode/serial/FreshStart2Nodes 10.01
220 TestMultiNode/serial/DeployApp2Nodes 104.44
221 TestMultiNode/serial/PingHostFrom2Pods 0.09
222 TestMultiNode/serial/AddNode 0.07
223 TestMultiNode/serial/MultiNodeLabels 0.06
224 TestMultiNode/serial/ProfileList 0.08
225 TestMultiNode/serial/CopyFile 0.06
226 TestMultiNode/serial/StopNode 0.14
227 TestMultiNode/serial/StartAfterStop 46.17
228 TestMultiNode/serial/RestartKeepsNodes 7.34
229 TestMultiNode/serial/DeleteNode 0.1
230 TestMultiNode/serial/StopMultiNode 1.96
231 TestMultiNode/serial/RestartMultiNode 5.25
232 TestMultiNode/serial/ValidateNameConflict 20.26
236 TestPreload 10.03
238 TestScheduledStopUnix 10.04
239 TestSkaffold 12.49
242 TestRunningBinaryUpgrade 608.78
244 TestKubernetesUpgrade 18.56
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.36
258 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.16
260 TestStoppedBinaryUpgrade/Upgrade 581.51
262 TestPause/serial/Start 9.92
272 TestNoKubernetes/serial/StartWithK8s 10.02
273 TestNoKubernetes/serial/StartWithStopK8s 5.29
274 TestNoKubernetes/serial/Start 5.31
278 TestNoKubernetes/serial/StartNoArgs 5.29
280 TestNetworkPlugins/group/auto/Start 9.9
281 TestNetworkPlugins/group/kindnet/Start 9.84
282 TestNetworkPlugins/group/calico/Start 9.76
283 TestNetworkPlugins/group/custom-flannel/Start 9.92
284 TestNetworkPlugins/group/false/Start 9.93
285 TestNetworkPlugins/group/enable-default-cni/Start 9.8
286 TestNetworkPlugins/group/flannel/Start 9.85
287 TestNetworkPlugins/group/bridge/Start 9.83
288 TestNetworkPlugins/group/kubenet/Start 9.85
291 TestStartStop/group/old-k8s-version/serial/FirstStart 9.75
292 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
296 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
297 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
300 TestStartStop/group/old-k8s-version/serial/Pause 0.1
302 TestStartStop/group/no-preload/serial/FirstStart 10.06
304 TestStartStop/group/embed-certs/serial/FirstStart 11.15
305 TestStartStop/group/no-preload/serial/DeployApp 0.1
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.15
309 TestStartStop/group/no-preload/serial/SecondStart 7.38
310 TestStartStop/group/embed-certs/serial/DeployApp 0.1
311 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
312 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
314 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
315 TestStartStop/group/no-preload/serial/Pause 0.11
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.91
320 TestStartStop/group/embed-certs/serial/SecondStart 6.27
321 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
322 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
323 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.13
325 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
326 TestStartStop/group/embed-certs/serial/Pause 0.11
329 TestStartStop/group/newest-cni/serial/FirstStart 9.92
331 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.8
332 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
333 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
334 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.08
335 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
340 TestStartStop/group/newest-cni/serial/SecondStart 5.25
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
344 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (29.02s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-639000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-639000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (29.023294209s)

-- stdout --
	{"specversion":"1.0","id":"7bbe6aed-c8af-4656-8b91-01ec1cc958e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-639000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e925a130-2846-4b2a-b952-413c80d2df25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19616"}}
	{"specversion":"1.0","id":"b484cfba-a11e-4374-8715-31989548ec87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig"}}
	{"specversion":"1.0","id":"c2daa682-0f7e-499f-8b17-588eaee6877e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"ff094065-355e-42e9-9c24-234308c3ae09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8bc54184-4d3d-4ee2-8f41-d000735d5df7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube"}}
	{"specversion":"1.0","id":"31a68e26-c298-4a6b-88db-8300f9ed56a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"23b35593-7f2e-49ba-8a7b-ee641936072b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c5f903df-af66-4437-b5e2-ca7ce0d7fa15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"4ba0172b-bdfe-49e7-afbe-e876a0729549","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f6348c0a-0cfe-435f-b8fd-0644d546380b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-639000\" primary control-plane node in \"download-only-639000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"78e8b4a9-9d65-4ad5-94e4-c460612997cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d4a300c2-d131-4744-9b52-76a708d925a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19616-1259/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106d07e80 0x106d07e80 0x106d07e80 0x106d07e80 0x106d07e80 0x106d07e80 0x106d07e80] Decompressors:map[bz2:0x140007d3440 gz:0x140007d3448 tar:0x140007d33f0 tar.bz2:0x140007d3400 tar.gz:0x140007d3410 tar.xz:0x140007d3420 tar.zst:0x140007d3430 tbz2:0x140007d3400 tgz:0x14
0007d3410 txz:0x140007d3420 tzst:0x140007d3430 xz:0x140007d3450 zip:0x140007d3460 zst:0x140007d3458] Getters:map[file:0x140004a2fb0 http:0x140000b4fa0 https:0x140000b4ff0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"a830b10d-6c0b-44e1-b0b3-48ffab9ebdd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0912 14:27:59.692886    1786 out.go:345] Setting OutFile to fd 1 ...
	I0912 14:27:59.693031    1786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 14:27:59.693034    1786 out.go:358] Setting ErrFile to fd 2...
	I0912 14:27:59.693037    1786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 14:27:59.693201    1786 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	W0912 14:27:59.693280    1786 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19616-1259/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19616-1259/.minikube/config/config.json: no such file or directory
	I0912 14:27:59.694596    1786 out.go:352] Setting JSON to true
	I0912 14:27:59.712051    1786 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1643,"bootTime":1726174836,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 14:27:59.712119    1786 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 14:27:59.718593    1786 out.go:97] [download-only-639000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 14:27:59.718721    1786 notify.go:220] Checking for updates...
	W0912 14:27:59.718752    1786 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball: no such file or directory
	I0912 14:27:59.720008    1786 out.go:169] MINIKUBE_LOCATION=19616
	I0912 14:27:59.722483    1786 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 14:27:59.727525    1786 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 14:27:59.729166    1786 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 14:27:59.732519    1786 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	W0912 14:27:59.738534    1786 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0912 14:27:59.738790    1786 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 14:27:59.743467    1786 out.go:97] Using the qemu2 driver based on user configuration
	I0912 14:27:59.743485    1786 start.go:297] selected driver: qemu2
	I0912 14:27:59.743500    1786 start.go:901] validating driver "qemu2" against <nil>
	I0912 14:27:59.743585    1786 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 14:27:59.746471    1786 out.go:169] Automatically selected the socket_vmnet network
	I0912 14:27:59.752084    1786 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0912 14:27:59.752174    1786 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0912 14:27:59.752236    1786 cni.go:84] Creating CNI manager for ""
	I0912 14:27:59.752252    1786 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0912 14:27:59.752298    1786 start.go:340] cluster config:
	{Name:download-only-639000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-639000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 14:27:59.757396    1786 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 14:27:59.761522    1786 out.go:97] Downloading VM boot image ...
	I0912 14:27:59.761546    1786 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso
	I0912 14:28:17.053394    1786 out.go:97] Starting "download-only-639000" primary control-plane node in "download-only-639000" cluster
	I0912 14:28:17.053423    1786 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0912 14:28:17.123500    1786 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0912 14:28:17.123534    1786 cache.go:56] Caching tarball of preloaded images
	I0912 14:28:17.123708    1786 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0912 14:28:17.128705    1786 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0912 14:28:17.128714    1786 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0912 14:28:17.215666    1786 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0912 14:28:27.388393    1786 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0912 14:28:27.388571    1786 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0912 14:28:28.084022    1786 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0912 14:28:28.084209    1786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/download-only-639000/config.json ...
	I0912 14:28:28.084227    1786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/download-only-639000/config.json: {Name:mk21f07567c0099c45babb8851d4182d9e947dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:28:28.084460    1786 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0912 14:28:28.084655    1786 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0912 14:28:28.636305    1786 out.go:193] 
	W0912 14:28:28.644235    1786 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19616-1259/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106d07e80 0x106d07e80 0x106d07e80 0x106d07e80 0x106d07e80 0x106d07e80 0x106d07e80] Decompressors:map[bz2:0x140007d3440 gz:0x140007d3448 tar:0x140007d33f0 tar.bz2:0x140007d3400 tar.gz:0x140007d3410 tar.xz:0x140007d3420 tar.zst:0x140007d3430 tbz2:0x140007d3400 tgz:0x140007d3410 txz:0x140007d3420 tzst:0x140007d3430 xz:0x140007d3450 zip:0x140007d3460 zst:0x140007d3458] Getters:map[file:0x140004a2fb0 http:0x140000b4fa0 https:0x140000b4ff0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0912 14:28:28.644260    1786 out_reason.go:110] 
	W0912 14:28:28.654212    1786 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 14:28:28.658126    1786 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-639000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (29.02s)
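
The exit status 40 above traces to a hard 404 on the kubectl checksum URL rather than a transient network failure: dl.k8s.io appears to have no darwin/arm64 kubectl artifact for v1.20.0 (arm64 macOS binaries only began shipping in later Kubernetes releases), so retries of this matrix entry should fail identically. A quick check against the same URL (assuming curl is available on the agent):

	# expect a 404 status line if the v1.20.0 darwin/arm64 artifact was never published
	curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1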

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19616-1259/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
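
This is a knock-on failure from json-events above: the download step exited before kubectl was cached, so the stat of the cache path necessarily fails. Verifiable on the agent (path taken from the error message):

	# the kubectl binary should be absent after the failed download
	ls -l /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/darwin/arm64/v1.20.0/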

TestOffline (9.93s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-315000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-315000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.774180333s)

-- stdout --
	* [offline-docker-315000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-315000" primary control-plane node in "offline-docker-315000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-315000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:14:02.972582    4413 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:14:02.972716    4413 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:14:02.972720    4413 out.go:358] Setting ErrFile to fd 2...
	I0912 15:14:02.972722    4413 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:14:02.972840    4413 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:14:02.974390    4413 out.go:352] Setting JSON to false
	I0912 15:14:02.992061    4413 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4406,"bootTime":1726174836,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:14:02.992129    4413 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:14:02.997478    4413 out.go:177] * [offline-docker-315000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:14:03.005318    4413 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:14:03.005345    4413 notify.go:220] Checking for updates...
	I0912 15:14:03.011225    4413 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:14:03.014313    4413 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:14:03.017327    4413 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:14:03.018912    4413 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:14:03.021221    4413 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:14:03.024726    4413 config.go:182] Loaded profile config "multinode-323000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:14:03.024784    4413 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:14:03.029115    4413 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:14:03.036265    4413 start.go:297] selected driver: qemu2
	I0912 15:14:03.036274    4413 start.go:901] validating driver "qemu2" against <nil>
	I0912 15:14:03.036281    4413 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:14:03.038010    4413 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 15:14:03.041262    4413 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:14:03.044305    4413 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:14:03.044326    4413 cni.go:84] Creating CNI manager for ""
	I0912 15:14:03.044334    4413 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:14:03.044339    4413 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 15:14:03.044378    4413 start.go:340] cluster config:
	{Name:offline-docker-315000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-315000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:14:03.048023    4413 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:14:03.052304    4413 out.go:177] * Starting "offline-docker-315000" primary control-plane node in "offline-docker-315000" cluster
	I0912 15:14:03.060238    4413 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 15:14:03.060261    4413 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0912 15:14:03.060271    4413 cache.go:56] Caching tarball of preloaded images
	I0912 15:14:03.060329    4413 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:14:03.060339    4413 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0912 15:14:03.060404    4413 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/offline-docker-315000/config.json ...
	I0912 15:14:03.060414    4413 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/offline-docker-315000/config.json: {Name:mk28123986edfac284c1165112a28c69250693ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:14:03.060724    4413 start.go:360] acquireMachinesLock for offline-docker-315000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:14:03.060759    4413 start.go:364] duration metric: took 26µs to acquireMachinesLock for "offline-docker-315000"
	I0912 15:14:03.060775    4413 start.go:93] Provisioning new machine with config: &{Name:offline-docker-315000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-315000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:14:03.060799    4413 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:14:03.068272    4413 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0912 15:14:03.084036    4413 start.go:159] libmachine.API.Create for "offline-docker-315000" (driver="qemu2")
	I0912 15:14:03.084076    4413 client.go:168] LocalClient.Create starting
	I0912 15:14:03.084161    4413 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:14:03.084191    4413 main.go:141] libmachine: Decoding PEM data...
	I0912 15:14:03.084204    4413 main.go:141] libmachine: Parsing certificate...
	I0912 15:14:03.084246    4413 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:14:03.084269    4413 main.go:141] libmachine: Decoding PEM data...
	I0912 15:14:03.084278    4413 main.go:141] libmachine: Parsing certificate...
	I0912 15:14:03.084669    4413 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:14:03.244474    4413 main.go:141] libmachine: Creating SSH key...
	I0912 15:14:03.279443    4413 main.go:141] libmachine: Creating Disk image...
	I0912 15:14:03.279454    4413 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:14:03.279703    4413 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/offline-docker-315000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/offline-docker-315000/disk.qcow2
	I0912 15:14:03.293508    4413 main.go:141] libmachine: STDOUT: 
	I0912 15:14:03.293532    4413 main.go:141] libmachine: STDERR: 
	I0912 15:14:03.293581    4413 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/offline-docker-315000/disk.qcow2 +20000M
	I0912 15:14:03.310880    4413 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:14:03.310898    4413 main.go:141] libmachine: STDERR: 
	I0912 15:14:03.310909    4413 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/offline-docker-315000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/offline-docker-315000/disk.qcow2
	I0912 15:14:03.310913    4413 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:14:03.310928    4413 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:14:03.310958    4413 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/offline-docker-315000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/offline-docker-315000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/offline-docker-315000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:e4:7e:f6:76:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/offline-docker-315000/disk.qcow2
	I0912 15:14:03.312515    4413 main.go:141] libmachine: STDOUT: 
	I0912 15:14:03.312537    4413 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:14:03.312555    4413 client.go:171] duration metric: took 228.482167ms to LocalClient.Create
	I0912 15:14:05.314614    4413 start.go:128] duration metric: took 2.253867625s to createHost
	I0912 15:14:05.314656    4413 start.go:83] releasing machines lock for "offline-docker-315000", held for 2.253955417s
	W0912 15:14:05.314681    4413 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:14:05.329994    4413 out.go:177] * Deleting "offline-docker-315000" in qemu2 ...
	W0912 15:14:05.348312    4413 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:14:05.348322    4413 start.go:729] Will try again in 5 seconds ...
	I0912 15:14:10.350458    4413 start.go:360] acquireMachinesLock for offline-docker-315000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:14:10.350936    4413 start.go:364] duration metric: took 340.875µs to acquireMachinesLock for "offline-docker-315000"
	I0912 15:14:10.351068    4413 start.go:93] Provisioning new machine with config: &{Name:offline-docker-315000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-315000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:14:10.351456    4413 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:14:10.361057    4413 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0912 15:14:10.412118    4413 start.go:159] libmachine.API.Create for "offline-docker-315000" (driver="qemu2")
	I0912 15:14:10.412171    4413 client.go:168] LocalClient.Create starting
	I0912 15:14:10.412281    4413 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:14:10.412339    4413 main.go:141] libmachine: Decoding PEM data...
	I0912 15:14:10.412359    4413 main.go:141] libmachine: Parsing certificate...
	I0912 15:14:10.412433    4413 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:14:10.412476    4413 main.go:141] libmachine: Decoding PEM data...
	I0912 15:14:10.412486    4413 main.go:141] libmachine: Parsing certificate...
	I0912 15:14:10.412988    4413 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:14:10.580993    4413 main.go:141] libmachine: Creating SSH key...
	I0912 15:14:10.650092    4413 main.go:141] libmachine: Creating Disk image...
	I0912 15:14:10.650097    4413 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:14:10.650326    4413 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/offline-docker-315000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/offline-docker-315000/disk.qcow2
	I0912 15:14:10.659563    4413 main.go:141] libmachine: STDOUT: 
	I0912 15:14:10.659647    4413 main.go:141] libmachine: STDERR: 
	I0912 15:14:10.659695    4413 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/offline-docker-315000/disk.qcow2 +20000M
	I0912 15:14:10.667370    4413 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:14:10.667431    4413 main.go:141] libmachine: STDERR: 
	I0912 15:14:10.667443    4413 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/offline-docker-315000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/offline-docker-315000/disk.qcow2
	I0912 15:14:10.667448    4413 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:14:10.667459    4413 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:14:10.667500    4413 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/offline-docker-315000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/offline-docker-315000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/offline-docker-315000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:6b:31:07:bf:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/offline-docker-315000/disk.qcow2
	I0912 15:14:10.669001    4413 main.go:141] libmachine: STDOUT: 
	I0912 15:14:10.669017    4413 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:14:10.669029    4413 client.go:171] duration metric: took 256.859875ms to LocalClient.Create
	I0912 15:14:12.671155    4413 start.go:128] duration metric: took 2.319710416s to createHost
	I0912 15:14:12.671211    4413 start.go:83] releasing machines lock for "offline-docker-315000", held for 2.320316125s
	W0912 15:14:12.671541    4413 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-315000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-315000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:14:12.688268    4413 out.go:201] 
	W0912 15:14:12.692316    4413 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:14:12.692360    4413 out.go:270] * 
	* 
	W0912 15:14:12.694975    4413 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:14:12.706121    4413 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-315000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-09-12 15:14:12.718488 -0700 PDT m=+2773.179790501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-315000 -n offline-docker-315000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-315000 -n offline-docker-315000: exit status 7 (68.77675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-315000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-315000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-315000
--- FAIL: TestOffline (9.93s)
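
Both VM creation attempts die at the same step: socket_vmnet_client cannot reach /var/run/socket_vmnet, i.e. the socket_vmnet daemon is not running (or is listening on a different socket path) on the build agent, so QEMU never starts. A hedged triage sequence, assuming the Homebrew-managed socket_vmnet setup described in minikube's qemu2 driver docs:

	# is the socket present?
	ls -l /var/run/socket_vmnet
	# is the daemon running? (assumes the Homebrew formula/service)
	sudo brew services info socket_vmnet
	# restart it if stopped
	sudo brew services restart socket_vmnet

The long tail of ~10s Start failures in the table above is consistent with this same connection error.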

TestAddons/parallel/Registry (71.33s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.221666ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-p5f26" [d42d83d3-de78-4a99-ab0d-4539040c1a33] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006457833s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-zh429" [21dafca9-6625-404e-9961-8c638a8f1694] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009831s
addons_test.go:342: (dbg) Run:  kubectl --context addons-094000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-094000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-094000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.065153916s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-094000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-094000 ip
2024/09/12 14:41:58 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-094000 addons disable registry --alsologtostderr -v=1
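
Both registry pods reported Running within seconds, and the probe only fails when going through the Service DNS name, which points the suspicion at in-cluster DNS or Service routing rather than the registry itself (the direct GET against the node IP above is the test's follow-up path). A hedged way to narrow it down from the same context:

	# does the registry Service have live endpoints?
	kubectl --context addons-094000 -n kube-system get svc,endpoints registry
	# can a throwaway pod resolve the Service name at all?
	kubectl --context addons-094000 run dns-probe --rm -it --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -- nslookup registry.kube-system.svc.cluster.local
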
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-094000 -n addons-094000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-094000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-639000 | jenkins | v1.34.0 | 12 Sep 24 14:27 PDT |                     |
	|         | -p download-only-639000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 12 Sep 24 14:28 PDT | 12 Sep 24 14:28 PDT |
	| delete  | -p download-only-639000              | download-only-639000 | jenkins | v1.34.0 | 12 Sep 24 14:28 PDT | 12 Sep 24 14:28 PDT |
	| start   | -o=json --download-only              | download-only-057000 | jenkins | v1.34.0 | 12 Sep 24 14:28 PDT |                     |
	|         | -p download-only-057000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 12 Sep 24 14:28 PDT | 12 Sep 24 14:28 PDT |
	| delete  | -p download-only-057000              | download-only-057000 | jenkins | v1.34.0 | 12 Sep 24 14:28 PDT | 12 Sep 24 14:28 PDT |
	| delete  | -p download-only-639000              | download-only-639000 | jenkins | v1.34.0 | 12 Sep 24 14:28 PDT | 12 Sep 24 14:28 PDT |
	| delete  | -p download-only-057000              | download-only-057000 | jenkins | v1.34.0 | 12 Sep 24 14:28 PDT | 12 Sep 24 14:28 PDT |
	| start   | --download-only -p                   | binary-mirror-484000 | jenkins | v1.34.0 | 12 Sep 24 14:28 PDT |                     |
	|         | binary-mirror-484000                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49312               |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-484000              | binary-mirror-484000 | jenkins | v1.34.0 | 12 Sep 24 14:28 PDT | 12 Sep 24 14:28 PDT |
	| addons  | disable dashboard -p                 | addons-094000        | jenkins | v1.34.0 | 12 Sep 24 14:28 PDT |                     |
	|         | addons-094000                        |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-094000        | jenkins | v1.34.0 | 12 Sep 24 14:28 PDT |                     |
	|         | addons-094000                        |                      |         |         |                     |                     |
	| start   | -p addons-094000 --wait=true         | addons-094000        | jenkins | v1.34.0 | 12 Sep 24 14:28 PDT | 12 Sep 24 14:32 PDT |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	| addons  | addons-094000 addons disable         | addons-094000        | jenkins | v1.34.0 | 12 Sep 24 14:32 PDT | 12 Sep 24 14:32 PDT |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-094000 addons                 | addons-094000        | jenkins | v1.34.0 | 12 Sep 24 14:41 PDT | 12 Sep 24 14:41 PDT |
	|         | disable csi-hostpath-driver          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-094000 addons                 | addons-094000        | jenkins | v1.34.0 | 12 Sep 24 14:41 PDT | 12 Sep 24 14:41 PDT |
	|         | disable volumesnapshots              |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-094000 addons disable         | addons-094000        | jenkins | v1.34.0 | 12 Sep 24 14:41 PDT | 12 Sep 24 14:41 PDT |
	|         | yakd --alsologtostderr -v=1          |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-094000        | jenkins | v1.34.0 | 12 Sep 24 14:41 PDT | 12 Sep 24 14:41 PDT |
	|         | -p addons-094000                     |                      |         |         |                     |                     |
	| ip      | addons-094000 ip                     | addons-094000        | jenkins | v1.34.0 | 12 Sep 24 14:41 PDT | 12 Sep 24 14:41 PDT |
	| addons  | addons-094000 addons disable         | addons-094000        | jenkins | v1.34.0 | 12 Sep 24 14:41 PDT | 12 Sep 24 14:41 PDT |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
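The Audit table wraps the long start invocation across several rows; reassembled into a single command for readability (a rendering of the rows above, not an additional run):

    out/minikube-darwin-arm64 start -p addons-094000 --wait=true --memory=4000 \
      --alsologtostderr --addons=registry --addons=metrics-server \
      --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth \
      --addons=cloud-spanner --addons=inspektor-gadget \
      --addons=storage-provisioner-rancher --addons=nvidia-device-plugin \
      --addons=yakd --addons=volcano --driver=qemu2 --addons=ingress \
      --addons=ingress-dns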
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 14:28:41
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 14:28:41.951014    1865 out.go:345] Setting OutFile to fd 1 ...
	I0912 14:28:41.951148    1865 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 14:28:41.951151    1865 out.go:358] Setting ErrFile to fd 2...
	I0912 14:28:41.951154    1865 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 14:28:41.951299    1865 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 14:28:41.952441    1865 out.go:352] Setting JSON to false
	I0912 14:28:41.968636    1865 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1685,"bootTime":1726174836,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 14:28:41.968705    1865 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 14:28:41.973566    1865 out.go:177] * [addons-094000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 14:28:41.980541    1865 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 14:28:41.980578    1865 notify.go:220] Checking for updates...
	I0912 14:28:41.987602    1865 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 14:28:41.990582    1865 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 14:28:41.993546    1865 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 14:28:41.996578    1865 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 14:28:41.999520    1865 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 14:28:42.002734    1865 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 14:28:42.006595    1865 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 14:28:42.013556    1865 start.go:297] selected driver: qemu2
	I0912 14:28:42.013561    1865 start.go:901] validating driver "qemu2" against <nil>
	I0912 14:28:42.013568    1865 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 14:28:42.015786    1865 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 14:28:42.018634    1865 out.go:177] * Automatically selected the socket_vmnet network
	I0912 14:28:42.021674    1865 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 14:28:42.021708    1865 cni.go:84] Creating CNI manager for ""
	I0912 14:28:42.021717    1865 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 14:28:42.021726    1865 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
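Bridge CNI is chosen automatically here for the qemu2 driver with the docker runtime on Kubernetes v1.24+; the selection can be overridden at start time if a different plugin is wanted. A hypothetical re-run, shown only to illustrate the flag:

    # force a specific CNI instead of the automatic bridge selection
    out/minikube-darwin-arm64 start -p addons-094000 --driver=qemu2 --cni=flannel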
	I0912 14:28:42.021755    1865 start.go:340] cluster config:
	{Name:addons-094000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-094000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 14:28:42.025460    1865 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 14:28:42.034612    1865 out.go:177] * Starting "addons-094000" primary control-plane node in "addons-094000" cluster
	I0912 14:28:42.038550    1865 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 14:28:42.038566    1865 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0912 14:28:42.038577    1865 cache.go:56] Caching tarball of preloaded images
	I0912 14:28:42.038642    1865 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 14:28:42.038653    1865 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0912 14:28:42.038893    1865 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/config.json ...
	I0912 14:28:42.038905    1865 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/config.json: {Name:mkc01a783ec489d0fcd9886914f67f3011b51814 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:28:42.039322    1865 start.go:360] acquireMachinesLock for addons-094000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 14:28:42.039398    1865 start.go:364] duration metric: took 69.334µs to acquireMachinesLock for "addons-094000"
	I0912 14:28:42.039413    1865 start.go:93] Provisioning new machine with config: &{Name:addons-094000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-094000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 14:28:42.039446    1865 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 14:28:42.044607    1865 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0912 14:28:42.285005    1865 start.go:159] libmachine.API.Create for "addons-094000" (driver="qemu2")
	I0912 14:28:42.285039    1865 client.go:168] LocalClient.Create starting
	I0912 14:28:42.285230    1865 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 14:28:42.339175    1865 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 14:28:42.480044    1865 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 14:28:43.238654    1865 main.go:141] libmachine: Creating SSH key...
	I0912 14:28:43.650138    1865 main.go:141] libmachine: Creating Disk image...
	I0912 14:28:43.650151    1865 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 14:28:43.650494    1865 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/disk.qcow2
	I0912 14:28:43.669101    1865 main.go:141] libmachine: STDOUT: 
	I0912 14:28:43.669130    1865 main.go:141] libmachine: STDERR: 
	I0912 14:28:43.669193    1865 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/disk.qcow2 +20000M
	I0912 14:28:43.677665    1865 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 14:28:43.677679    1865 main.go:141] libmachine: STDERR: 
	I0912 14:28:43.677693    1865 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/disk.qcow2
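The disk image is built by converting a raw seed to qcow2 and then growing it to the configured 20000 MB; the same two qemu-img steps as a standalone sketch, with placeholder file names:

    # convert the raw bootstrap disk to qcow2 (file names are placeholders)
    qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
    # grow the image by the configured DiskSize
    qemu-img resize disk.qcow2 +20000M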
	I0912 14:28:43.677701    1865 main.go:141] libmachine: Starting QEMU VM...
	I0912 14:28:43.677741    1865 qemu.go:418] Using hvf for hardware acceleration
	I0912 14:28:43.677766    1865 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:98:84:b4:ed:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/disk.qcow2
	I0912 14:28:43.739801    1865 main.go:141] libmachine: STDOUT: 
	I0912 14:28:43.739836    1865 main.go:141] libmachine: STDERR: 
	I0912 14:28:43.739840    1865 main.go:141] libmachine: Attempt 0
	I0912 14:28:43.739855    1865 main.go:141] libmachine: Searching for 76:98:84:b4:ed:b8 in /var/db/dhcpd_leases ...
	I0912 14:28:43.739910    1865 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0912 14:28:43.739929    1865 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e4ae52}
	I0912 14:28:45.742027    1865 main.go:141] libmachine: Attempt 1
	I0912 14:28:45.742101    1865 main.go:141] libmachine: Searching for 76:98:84:b4:ed:b8 in /var/db/dhcpd_leases ...
	I0912 14:28:45.742424    1865 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0912 14:28:45.742475    1865 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e4ae52}
	I0912 14:28:47.744820    1865 main.go:141] libmachine: Attempt 2
	I0912 14:28:47.744907    1865 main.go:141] libmachine: Searching for 76:98:84:b4:ed:b8 in /var/db/dhcpd_leases ...
	I0912 14:28:47.745138    1865 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0912 14:28:47.745196    1865 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e4ae52}
	I0912 14:28:49.746204    1865 main.go:141] libmachine: Attempt 3
	I0912 14:28:49.746227    1865 main.go:141] libmachine: Searching for 76:98:84:b4:ed:b8 in /var/db/dhcpd_leases ...
	I0912 14:28:49.746290    1865 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0912 14:28:49.746303    1865 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e4ae52}
	I0912 14:28:51.748304    1865 main.go:141] libmachine: Attempt 4
	I0912 14:28:51.748322    1865 main.go:141] libmachine: Searching for 76:98:84:b4:ed:b8 in /var/db/dhcpd_leases ...
	I0912 14:28:51.748405    1865 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0912 14:28:51.748414    1865 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e4ae52}
	I0912 14:28:53.750409    1865 main.go:141] libmachine: Attempt 5
	I0912 14:28:53.750417    1865 main.go:141] libmachine: Searching for 76:98:84:b4:ed:b8 in /var/db/dhcpd_leases ...
	I0912 14:28:53.750449    1865 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0912 14:28:53.750456    1865 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e4ae52}
	I0912 14:28:55.752486    1865 main.go:141] libmachine: Attempt 6
	I0912 14:28:55.752511    1865 main.go:141] libmachine: Searching for 76:98:84:b4:ed:b8 in /var/db/dhcpd_leases ...
	I0912 14:28:55.752591    1865 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0912 14:28:55.752600    1865 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e4ae52}
	I0912 14:28:57.754696    1865 main.go:141] libmachine: Attempt 7
	I0912 14:28:57.754768    1865 main.go:141] libmachine: Searching for 76:98:84:b4:ed:b8 in /var/db/dhcpd_leases ...
	I0912 14:28:57.755160    1865 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0912 14:28:57.755212    1865 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:76:98:84:b4:ed:b8 ID:1,76:98:84:b4:ed:b8 Lease:0x66e4ae98}
	I0912 14:28:57.755228    1865 main.go:141] libmachine: Found match: 76:98:84:b4:ed:b8
	I0912 14:28:57.755262    1865 main.go:141] libmachine: IP: 192.168.105.2
	I0912 14:28:57.755285    1865 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
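libmachine discovers the guest IP by polling macOS's vmnet DHCP lease file until the VM's MAC address appears, as seen in the attempts above; the equivalent manual lookup, assuming the same MAC:

    # find the lease handed out to the VM's NIC
    grep -i '76:98:84:b4:ed:b8' /var/db/dhcpd_leases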
	I0912 14:29:00.777090    1865 machine.go:93] provisionDockerMachine start ...
	I0912 14:29:00.778450    1865 main.go:141] libmachine: Using SSH client type: native
	I0912 14:29:00.778922    1865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051ebba0] 0x1051ee400 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0912 14:29:00.778938    1865 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 14:29:00.849327    1865 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0912 14:29:00.849357    1865 buildroot.go:166] provisioning hostname "addons-094000"
	I0912 14:29:00.849479    1865 main.go:141] libmachine: Using SSH client type: native
	I0912 14:29:00.849715    1865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051ebba0] 0x1051ee400 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0912 14:29:00.849724    1865 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-094000 && echo "addons-094000" | sudo tee /etc/hostname
	I0912 14:29:00.908091    1865 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-094000
	
	I0912 14:29:00.908141    1865 main.go:141] libmachine: Using SSH client type: native
	I0912 14:29:00.908279    1865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051ebba0] 0x1051ee400 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0912 14:29:00.908288    1865 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-094000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-094000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-094000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 14:29:00.958652    1865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 14:29:00.958671    1865 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19616-1259/.minikube CaCertPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19616-1259/.minikube}
	I0912 14:29:00.958679    1865 buildroot.go:174] setting up certificates
	I0912 14:29:00.958683    1865 provision.go:84] configureAuth start
	I0912 14:29:00.958687    1865 provision.go:143] copyHostCerts
	I0912 14:29:00.958789    1865 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19616-1259/.minikube/ca.pem (1078 bytes)
	I0912 14:29:00.959039    1865 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19616-1259/.minikube/cert.pem (1123 bytes)
	I0912 14:29:00.959156    1865 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19616-1259/.minikube/key.pem (1675 bytes)
	I0912 14:29:00.959246    1865 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca-key.pem org=jenkins.addons-094000 san=[127.0.0.1 192.168.105.2 addons-094000 localhost minikube]
	I0912 14:29:01.040363    1865 provision.go:177] copyRemoteCerts
	I0912 14:29:01.040418    1865 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 14:29:01.040437    1865 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/id_rsa Username:docker}
	I0912 14:29:01.067422    1865 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0912 14:29:01.075839    1865 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0912 14:29:01.083906    1865 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 14:29:01.092110    1865 provision.go:87] duration metric: took 133.419375ms to configureAuth
	I0912 14:29:01.092117    1865 buildroot.go:189] setting minikube options for container-runtime
	I0912 14:29:01.092218    1865 config.go:182] Loaded profile config "addons-094000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 14:29:01.092252    1865 main.go:141] libmachine: Using SSH client type: native
	I0912 14:29:01.092336    1865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051ebba0] 0x1051ee400 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0912 14:29:01.092341    1865 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0912 14:29:01.141484    1865 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0912 14:29:01.141492    1865 buildroot.go:70] root file system type: tmpfs
	I0912 14:29:01.141553    1865 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0912 14:29:01.141601    1865 main.go:141] libmachine: Using SSH client type: native
	I0912 14:29:01.141706    1865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051ebba0] 0x1051ee400 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0912 14:29:01.141739    1865 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0912 14:29:01.193048    1865 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
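The empty ExecStart= line above is the standard systemd idiom for replacing, rather than appending to, a command inherited from the base unit; a minimal drop-in showing the same pattern, with a placeholder command:

    # /etc/systemd/system/docker.service.d/override.conf (illustrative only)
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock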
	
	I0912 14:29:01.193098    1865 main.go:141] libmachine: Using SSH client type: native
	I0912 14:29:01.193213    1865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051ebba0] 0x1051ee400 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0912 14:29:01.193221    1865 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0912 14:29:02.560988    1865 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0912 14:29:02.561001    1865 machine.go:96] duration metric: took 1.783935708s to provisionDockerMachine
	I0912 14:29:02.561008    1865 client.go:171] duration metric: took 20.276556708s to LocalClient.Create
	I0912 14:29:02.561024    1865 start.go:167] duration metric: took 20.2766145s to libmachine.API.Create "addons-094000"
	I0912 14:29:02.561029    1865 start.go:293] postStartSetup for "addons-094000" (driver="qemu2")
	I0912 14:29:02.561037    1865 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 14:29:02.561118    1865 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 14:29:02.561128    1865 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/id_rsa Username:docker}
	I0912 14:29:02.587405    1865 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 14:29:02.589186    1865 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 14:29:02.589200    1865 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19616-1259/.minikube/addons for local assets ...
	I0912 14:29:02.589297    1865 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19616-1259/.minikube/files for local assets ...
	I0912 14:29:02.589328    1865 start.go:296] duration metric: took 28.296833ms for postStartSetup
	I0912 14:29:02.589721    1865 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/config.json ...
	I0912 14:29:02.589911    1865 start.go:128] duration metric: took 20.5510595s to createHost
	I0912 14:29:02.589937    1865 main.go:141] libmachine: Using SSH client type: native
	I0912 14:29:02.590027    1865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051ebba0] 0x1051ee400 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0912 14:29:02.590035    1865 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 14:29:02.637136    1865 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726176542.977563086
	
	I0912 14:29:02.637147    1865 fix.go:216] guest clock: 1726176542.977563086
	I0912 14:29:02.637151    1865 fix.go:229] Guest: 2024-09-12 14:29:02.977563086 -0700 PDT Remote: 2024-09-12 14:29:02.589915 -0700 PDT m=+20.658409626 (delta=387.648086ms)
	I0912 14:29:02.637171    1865 fix.go:200] guest clock delta is within tolerance: 387.648086ms
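The guest clock is read with date +%s.%N over SSH and compared against the host's wall clock; the sub-second delta above falls within tolerance. A rough manual comparison, assuming the profile is running (the macOS host's BSD date only offers second resolution):

    # guest clock, then host clock
    out/minikube-darwin-arm64 -p addons-094000 ssh -- date +%s
    date +%s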
	I0912 14:29:02.637174    1865 start.go:83] releasing machines lock for "addons-094000", held for 20.598371292s
	I0912 14:29:02.637457    1865 ssh_runner.go:195] Run: cat /version.json
	I0912 14:29:02.637459    1865 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 14:29:02.637466    1865 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/id_rsa Username:docker}
	I0912 14:29:02.637480    1865 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/id_rsa Username:docker}
	I0912 14:29:02.706237    1865 ssh_runner.go:195] Run: systemctl --version
	I0912 14:29:02.708694    1865 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 14:29:02.710880    1865 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 14:29:02.710913    1865 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 14:29:02.717414    1865 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 14:29:02.717423    1865 start.go:495] detecting cgroup driver to use...
	I0912 14:29:02.717537    1865 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 14:29:02.724025    1865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0912 14:29:02.727787    1865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0912 14:29:02.731582    1865 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0912 14:29:02.731608    1865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0912 14:29:02.735462    1865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 14:29:02.739510    1865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0912 14:29:02.743217    1865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 14:29:02.747099    1865 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 14:29:02.750936    1865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0912 14:29:02.754713    1865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0912 14:29:02.758248    1865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0912 14:29:02.761807    1865 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 14:29:02.764989    1865 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 14:29:02.768315    1865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 14:29:02.841583    1865 ssh_runner.go:195] Run: sudo systemctl restart containerd
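The sed edits above force containerd onto the cgroupfs driver and the runc v2 shim before this restart; a quick way to confirm what the config ended up with, assuming the profile is reachable over SSH:

    # SystemdCgroup should read false after the edits above
    out/minikube-darwin-arm64 -p addons-094000 ssh -- \
      grep SystemdCgroup /etc/containerd/config.toml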
	I0912 14:29:02.848242    1865 start.go:495] detecting cgroup driver to use...
	I0912 14:29:02.848290    1865 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0912 14:29:02.856022    1865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 14:29:02.861503    1865 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 14:29:02.869138    1865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 14:29:02.874317    1865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 14:29:02.879453    1865 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0912 14:29:02.923137    1865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 14:29:02.929134    1865 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 14:29:02.935509    1865 ssh_runner.go:195] Run: which cri-dockerd
	I0912 14:29:02.936774    1865 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0912 14:29:02.940223    1865 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0912 14:29:02.946214    1865 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0912 14:29:03.013736    1865 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0912 14:29:03.106718    1865 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0912 14:29:03.106788    1865 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0912 14:29:03.112716    1865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 14:29:03.181665    1865 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0912 14:29:05.378744    1865 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.197126583s)
	I0912 14:29:05.378818    1865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0912 14:29:05.384381    1865 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0912 14:29:05.391493    1865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0912 14:29:05.397003    1865 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0912 14:29:05.465790    1865 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0912 14:29:05.543960    1865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 14:29:05.626618    1865 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0912 14:29:05.633128    1865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0912 14:29:05.638274    1865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 14:29:05.706147    1865 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0912 14:29:05.731381    1865 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0912 14:29:05.731468    1865 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0912 14:29:05.734365    1865 start.go:563] Will wait 60s for crictl version
	I0912 14:29:05.734411    1865 ssh_runner.go:195] Run: which crictl
	I0912 14:29:05.736352    1865 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 14:29:05.756278    1865 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0912 14:29:05.756350    1865 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0912 14:29:05.767854    1865 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0912 14:29:05.782706    1865 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0912 14:29:05.782859    1865 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0912 14:29:05.784209    1865 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 14:29:05.788398    1865 kubeadm.go:883] updating cluster {Name:addons-094000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-094000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 14:29:05.788450    1865 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 14:29:05.788493    1865 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0912 14:29:05.793793    1865 docker.go:685] Got preloaded images: 
	I0912 14:29:05.793800    1865 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0912 14:29:05.793847    1865 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0912 14:29:05.797256    1865 ssh_runner.go:195] Run: which lz4
	I0912 14:29:05.798605    1865 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 14:29:05.800231    1865 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 14:29:05.800242    1865 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (322160019 bytes)
	I0912 14:29:07.043723    1865 docker.go:649] duration metric: took 1.245184084s to copy over tarball
	I0912 14:29:07.043780    1865 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 14:29:07.989067    1865 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 14:29:08.003678    1865 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0912 14:29:08.007700    1865 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0912 14:29:08.013770    1865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 14:29:08.098335    1865 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0912 14:29:10.856729    1865 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.758457375s)
	I0912 14:29:10.856828    1865 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0912 14:29:10.863021    1865 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0912 14:29:10.863040    1865 cache_images.go:84] Images are preloaded, skipping loading
	I0912 14:29:10.863045    1865 kubeadm.go:934] updating node { 192.168.105.2 8443 v1.31.1 docker true true} ...
	I0912 14:29:10.863109    1865 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-094000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-094000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 14:29:10.863181    1865 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0912 14:29:10.884828    1865 cni.go:84] Creating CNI manager for ""
	I0912 14:29:10.884841    1865 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 14:29:10.884849    1865 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 14:29:10.884858    1865 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-094000 NodeName:addons-094000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 14:29:10.884914    1865 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-094000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
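The rendered config above is later copied to /var/tmp/minikube/kubeadm.yaml.new inside the guest; once moved into place it can be sanity-checked with kubeadm's dry-run mode. A sketch only — running it against an already-initialized node may report conflicts:

    # dry-run the rendered kubeadm config inside the guest (paths from this log)
    out/minikube-darwin-arm64 -p addons-094000 ssh -- \
      sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run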
	
	I0912 14:29:10.884963    1865 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 14:29:10.888629    1865 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 14:29:10.888659    1865 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 14:29:10.892102    1865 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0912 14:29:10.897811    1865 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 14:29:10.903649    1865 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0912 14:29:10.910848    1865 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0912 14:29:10.912091    1865 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 14:29:10.916189    1865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 14:29:10.986682    1865 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 14:29:10.993518    1865 certs.go:68] Setting up /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000 for IP: 192.168.105.2
	I0912 14:29:10.993527    1865 certs.go:194] generating shared ca certs ...
	I0912 14:29:10.993549    1865 certs.go:226] acquiring lock for ca certs: {Name:mkbb0c3f29ef431420fb2bc7ce1073854ddb346b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:29:10.993734    1865 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/ca.key
	I0912 14:29:11.048755    1865 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19616-1259/.minikube/ca.crt ...
	I0912 14:29:11.048765    1865 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/ca.crt: {Name:mkaed199b4d59b979240a0906f967519ceb76f24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:29:11.049069    1865 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19616-1259/.minikube/ca.key ...
	I0912 14:29:11.049073    1865 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/ca.key: {Name:mk6feac05879b259cfe898af73a36e26bd5df1ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:29:11.049201    1865 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/proxy-client-ca.key
	I0912 14:29:11.261671    1865 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19616-1259/.minikube/proxy-client-ca.crt ...
	I0912 14:29:11.261686    1865 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/proxy-client-ca.crt: {Name:mk9cf18e475be95ea270df31e7aaf1d39d38763c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:29:11.261991    1865 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19616-1259/.minikube/proxy-client-ca.key ...
	I0912 14:29:11.261997    1865 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/proxy-client-ca.key: {Name:mke2c489038728660521db042e68ac84a99598dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:29:11.262123    1865 certs.go:256] generating profile certs ...
	I0912 14:29:11.262164    1865 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.key
	I0912 14:29:11.262172    1865 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.crt with IP's: []
	I0912 14:29:11.314311    1865 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.crt ...
	I0912 14:29:11.314315    1865 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.crt: {Name:mka32657a69ed02d5ae8c19d6e1418a0cfee4064 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:29:11.314450    1865 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.key ...
	I0912 14:29:11.314454    1865 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.key: {Name:mk6e7534f08996607bace12e064d9140988786b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:29:11.314563    1865 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/apiserver.key.17c4c0d1
	I0912 14:29:11.314575    1865 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/apiserver.crt.17c4c0d1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2]
	I0912 14:29:11.424468    1865 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/apiserver.crt.17c4c0d1 ...
	I0912 14:29:11.424472    1865 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/apiserver.crt.17c4c0d1: {Name:mk3ed0cb2455d450cd6548f6587957855db72a73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:29:11.424627    1865 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/apiserver.key.17c4c0d1 ...
	I0912 14:29:11.424630    1865 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/apiserver.key.17c4c0d1: {Name:mk88c1c6aaf9a2029def1314e15a68f769bc6405 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:29:11.424742    1865 certs.go:381] copying /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/apiserver.crt.17c4c0d1 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/apiserver.crt
	I0912 14:29:11.424939    1865 certs.go:385] copying /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/apiserver.key.17c4c0d1 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/apiserver.key
	I0912 14:29:11.425084    1865 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/proxy-client.key
	I0912 14:29:11.425096    1865 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/proxy-client.crt with IP's: []
	I0912 14:29:11.657489    1865 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/proxy-client.crt ...
	I0912 14:29:11.657496    1865 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/proxy-client.crt: {Name:mk7da494d1c2654c07b88649e2779734c54eb17b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:29:11.657686    1865 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/proxy-client.key ...
	I0912 14:29:11.657689    1865 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/proxy-client.key: {Name:mka87d438b66e0aa8b9a971f4f4a41007550d8cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:29:11.657949    1865 certs.go:484] found cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 14:29:11.657989    1865 certs.go:484] found cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem (1078 bytes)
	I0912 14:29:11.658020    1865 certs.go:484] found cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem (1123 bytes)
	I0912 14:29:11.658046    1865 certs.go:484] found cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/key.pem (1675 bytes)
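
The apiserver serving cert generated above is signed by the shared minikubeCA and carries IP SANs for the service VIP (10.96.0.1), loopback, an alternate service IP, and the node IP (192.168.105.2). A standard-library sketch of the same shape of issuance; this is not minikube's crypto.go, and the key sizes and lifetimes here are assumptions:

// gen_apiserver_cert.go: self-signed CA plus a CA-signed serving cert with
// the four IP SANs seen in the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Self-signed CA, standing in for minikubeCA.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// Serving cert signed by the CA, SANs covering VIP, loopback, and node IP.
	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.105.2"),
		},
	}
	srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, srvKey))
	fmt.Printf("CA: %d bytes DER, apiserver cert: %d bytes DER\n", len(caDER), len(srvDER))
}
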
	I0912 14:29:11.658563    1865 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 14:29:11.669167    1865 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 14:29:11.681514    1865 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 14:29:11.690407    1865 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 14:29:11.698741    1865 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0912 14:29:11.707058    1865 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 14:29:11.715302    1865 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 14:29:11.723436    1865 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0912 14:29:11.731738    1865 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 14:29:11.739966    1865 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 14:29:11.746603    1865 ssh_runner.go:195] Run: openssl version
	I0912 14:29:11.749193    1865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 14:29:11.753239    1865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 14:29:11.754674    1865 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:29 /usr/share/ca-certificates/minikubeCA.pem
	I0912 14:29:11.754700    1865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 14:29:11.756826    1865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
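
The two commands above implement OpenSSL-style CA lookup: openssl x509 -hash prints the subject-name hash (b5213941 here), and a symlink named <hash>.0 under /etc/ssl/certs lets TLS stacks find the trusted CA by hash. A small sketch of the same step, assuming the openssl binary is on PATH (paths mirror the log; running it for real needs root):

// hash_link.go: compute the subject-name hash via openssl, then create the
// <hash>.0 symlink with ln -fs semantics.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // ln -fs semantics: replace a stale link if present
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
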
	I0912 14:29:11.760586    1865 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 14:29:11.761984    1865 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0912 14:29:11.762026    1865 kubeadm.go:392] StartCluster: {Name:addons-094000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-094000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 14:29:11.762091    1865 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0912 14:29:11.767749    1865 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 14:29:11.771710    1865 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 14:29:11.775256    1865 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 14:29:11.778877    1865 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 14:29:11.778884    1865 kubeadm.go:157] found existing configuration files:
	
	I0912 14:29:11.778906    1865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 14:29:11.782370    1865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 14:29:11.782400    1865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 14:29:11.785705    1865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 14:29:11.788680    1865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 14:29:11.788700    1865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 14:29:11.792021    1865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 14:29:11.795465    1865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 14:29:11.795484    1865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 14:29:11.799343    1865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 14:29:11.802839    1865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 14:29:11.802864    1865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 14:29:11.806538    1865 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 14:29:11.829076    1865 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0912 14:29:11.829106    1865 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 14:29:11.870537    1865 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 14:29:11.870610    1865 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 14:29:11.870687    1865 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0912 14:29:11.874829    1865 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 14:29:11.883032    1865 out.go:235]   - Generating certificates and keys ...
	I0912 14:29:11.883067    1865 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 14:29:11.883097    1865 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 14:29:12.084609    1865 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0912 14:29:12.272051    1865 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0912 14:29:12.312762    1865 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0912 14:29:12.394544    1865 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0912 14:29:12.490562    1865 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0912 14:29:12.490640    1865 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-094000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0912 14:29:12.678128    1865 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0912 14:29:12.678198    1865 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-094000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0912 14:29:12.918638    1865 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0912 14:29:12.974095    1865 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0912 14:29:13.026907    1865 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0912 14:29:13.026942    1865 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 14:29:13.127768    1865 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 14:29:13.164687    1865 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0912 14:29:13.292972    1865 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 14:29:13.363155    1865 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 14:29:13.529103    1865 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 14:29:13.529259    1865 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 14:29:13.530496    1865 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 14:29:13.533836    1865 out.go:235]   - Booting up control plane ...
	I0912 14:29:13.533884    1865 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 14:29:13.533936    1865 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 14:29:13.533970    1865 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 14:29:13.538478    1865 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 14:29:13.542323    1865 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 14:29:13.542349    1865 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 14:29:13.627129    1865 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0912 14:29:13.627191    1865 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0912 14:29:14.137243    1865 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 509.952709ms
	I0912 14:29:14.137312    1865 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0912 14:29:17.640849    1865 kubeadm.go:310] [api-check] The API server is healthy after 3.503448544s
	I0912 14:29:17.666264    1865 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 14:29:17.677201    1865 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 14:29:17.693293    1865 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0912 14:29:17.693502    1865 kubeadm.go:310] [mark-control-plane] Marking the node addons-094000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0912 14:29:17.699742    1865 kubeadm.go:310] [bootstrap-token] Using token: xgnzgx.wk74o5l5ix4zgsac
	I0912 14:29:17.705987    1865 out.go:235]   - Configuring RBAC rules ...
	I0912 14:29:17.706078    1865 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 14:29:17.707186    1865 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 14:29:17.714282    1865 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 14:29:17.715685    1865 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 14:29:17.716974    1865 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 14:29:17.718542    1865 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 14:29:18.047024    1865 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 14:29:18.455163    1865 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0912 14:29:19.052171    1865 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0912 14:29:19.052547    1865 kubeadm.go:310] 
	I0912 14:29:19.052574    1865 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0912 14:29:19.052578    1865 kubeadm.go:310] 
	I0912 14:29:19.052622    1865 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0912 14:29:19.052626    1865 kubeadm.go:310] 
	I0912 14:29:19.052637    1865 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0912 14:29:19.052666    1865 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 14:29:19.052692    1865 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 14:29:19.052694    1865 kubeadm.go:310] 
	I0912 14:29:19.052718    1865 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0912 14:29:19.052722    1865 kubeadm.go:310] 
	I0912 14:29:19.052744    1865 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0912 14:29:19.052747    1865 kubeadm.go:310] 
	I0912 14:29:19.052771    1865 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0912 14:29:19.052820    1865 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 14:29:19.052854    1865 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 14:29:19.052857    1865 kubeadm.go:310] 
	I0912 14:29:19.052895    1865 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0912 14:29:19.052936    1865 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0912 14:29:19.052941    1865 kubeadm.go:310] 
	I0912 14:29:19.052983    1865 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xgnzgx.wk74o5l5ix4zgsac \
	I0912 14:29:19.053048    1865 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:927739ba46076d32ef09500def7ebaf4576e192a933c1b27a78721d37c8894ab \
	I0912 14:29:19.053059    1865 kubeadm.go:310] 	--control-plane 
	I0912 14:29:19.053063    1865 kubeadm.go:310] 
	I0912 14:29:19.053105    1865 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0912 14:29:19.053108    1865 kubeadm.go:310] 
	I0912 14:29:19.053145    1865 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xgnzgx.wk74o5l5ix4zgsac \
	I0912 14:29:19.053218    1865 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:927739ba46076d32ef09500def7ebaf4576e192a933c1b27a78721d37c8894ab 
	I0912 14:29:19.053348    1865 kubeadm.go:310] W0912 21:29:12.168534    1597 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 14:29:19.053505    1865 kubeadm.go:310] W0912 21:29:12.168945    1597 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 14:29:19.053566    1865 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 14:29:19.053575    1865 cni.go:84] Creating CNI manager for ""
	I0912 14:29:19.053583    1865 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 14:29:19.060985    1865 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 14:29:19.065185    1865 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 14:29:19.069075    1865 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
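
With kubeadm done, minikube writes a 496-byte bridge CNI conflist to /etc/cni/net.d. The sketch below emits an illustrative bridge+portmap conflist; the field values are assumptions (only the 10.244.0.0/16 pod subnet is taken from the kubeadm config earlier in this log), not the exact file minikube ships:

// bridge_conflist.go: emit an example bridge CNI config list of the kind
// the "Configuring bridge CNI" step installs.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	conflist := map[string]any{
		"cniVersion": "1.0.0",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":      "bridge",
				"bridge":    "bridge",
				"isGateway": true,
				"ipMasq":    true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // matches podSubnet in the kubeadm config
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	b, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(b))
}
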
	I0912 14:29:19.075301    1865 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 14:29:19.075352    1865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:29:19.075413    1865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-094000 minikube.k8s.io/updated_at=2024_09_12T14_29_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8 minikube.k8s.io/name=addons-094000 minikube.k8s.io/primary=true
	I0912 14:29:19.079239    1865 ops.go:34] apiserver oom_adj: -16
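
The -16 logged above is the apiserver's OOM score adjustment read from /proc, meaning the kernel is biased against OOM-killing the process. A minimal sketch of the same probe (it takes the first pgrep match only; a more careful implementation would handle multiple PIDs):

// oom_probe.go: find the kube-apiserver PID and read its oom_adj from /proc.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kube-apiserver not found:", err)
		return
	}
	pids := strings.Fields(string(out))
	if len(pids) == 0 {
		return
	}
	adj, err := os.ReadFile("/proc/" + pids[0] + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj))) // -16 in the log
}
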
	I0912 14:29:19.136788    1865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:29:19.638936    1865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:29:20.138866    1865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:29:20.638865    1865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:29:21.138453    1865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:29:21.639015    1865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:29:22.138850    1865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:29:22.636868    1865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:29:23.138910    1865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:29:23.638729    1865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:29:23.679559    1865 kubeadm.go:1113] duration metric: took 4.604382167s to wait for elevateKubeSystemPrivileges
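
The repeated "kubectl get sa default" lines above are why elevateKubeSystemPrivileges took ~4.6s: the minikube-rbac ClusterRoleBinding cannot bind a ServiceAccount the controller-manager has not created yet, so minikube polls for the default ServiceAccount roughly every 500ms until it appears. A dependency-free sketch of that retry loop (shelling out to a local kubectl; minikube runs the equivalent over SSH inside the VM):

// wait_default_sa.go: retry until the default ServiceAccount exists or a
// deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if exec.Command("kubectl", "get", "sa", "default").Run() == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	fmt.Println("timed out waiting for default ServiceAccount")
}
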
	I0912 14:29:23.679576    1865 kubeadm.go:394] duration metric: took 11.917897833s to StartCluster
	I0912 14:29:23.679585    1865 settings.go:142] acquiring lock: {Name:mk5a46170b8bd524e48b63472100abbce9e9531f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:29:23.679753    1865 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 14:29:23.679979    1865 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/kubeconfig: {Name:mk048c749582c7be36b3ac030be68b87cf483b70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:29:23.680210    1865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0912 14:29:23.680231    1865 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 14:29:23.680276    1865 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0912 14:29:23.680321    1865 addons.go:69] Setting yakd=true in profile "addons-094000"
	I0912 14:29:23.680328    1865 config.go:182] Loaded profile config "addons-094000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 14:29:23.680333    1865 addons.go:234] Setting addon yakd=true in "addons-094000"
	I0912 14:29:23.680346    1865 host.go:66] Checking if "addons-094000" exists ...
	I0912 14:29:23.680359    1865 addons.go:69] Setting storage-provisioner=true in profile "addons-094000"
	I0912 14:29:23.680367    1865 addons.go:234] Setting addon storage-provisioner=true in "addons-094000"
	I0912 14:29:23.680377    1865 host.go:66] Checking if "addons-094000" exists ...
	I0912 14:29:23.680391    1865 addons.go:69] Setting inspektor-gadget=true in profile "addons-094000"
	I0912 14:29:23.680404    1865 addons.go:234] Setting addon inspektor-gadget=true in "addons-094000"
	I0912 14:29:23.680417    1865 host.go:66] Checking if "addons-094000" exists ...
	I0912 14:29:23.680427    1865 addons.go:69] Setting volumesnapshots=true in profile "addons-094000"
	I0912 14:29:23.680433    1865 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-094000"
	I0912 14:29:23.680464    1865 addons.go:234] Setting addon volumesnapshots=true in "addons-094000"
	I0912 14:29:23.680472    1865 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-094000"
	I0912 14:29:23.680497    1865 host.go:66] Checking if "addons-094000" exists ...
	I0912 14:29:23.680506    1865 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-094000"
	I0912 14:29:23.680466    1865 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-094000"
	I0912 14:29:23.680554    1865 host.go:66] Checking if "addons-094000" exists ...
	I0912 14:29:23.680653    1865 addons.go:69] Setting registry=true in profile "addons-094000"
	I0912 14:29:23.680662    1865 addons.go:234] Setting addon registry=true in "addons-094000"
	I0912 14:29:23.680685    1865 addons.go:69] Setting default-storageclass=true in profile "addons-094000"
	I0912 14:29:23.680687    1865 host.go:66] Checking if "addons-094000" exists ...
	I0912 14:29:23.680689    1865 addons.go:69] Setting gcp-auth=true in profile "addons-094000"
	I0912 14:29:23.680692    1865 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-094000"
	I0912 14:29:23.680700    1865 mustload.go:65] Loading cluster: addons-094000
	I0912 14:29:23.680711    1865 retry.go:31] will retry after 564.195435ms: connect: dial unix /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/monitor: connect: connection refused
	I0912 14:29:23.680734    1865 retry.go:31] will retry after 781.719587ms: connect: dial unix /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/monitor: connect: connection refused
	I0912 14:29:23.680746    1865 retry.go:31] will retry after 752.499812ms: connect: dial unix /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/monitor: connect: connection refused
	I0912 14:29:23.680755    1865 addons.go:69] Setting cloud-spanner=true in profile "addons-094000"
	I0912 14:29:23.680759    1865 config.go:182] Loaded profile config "addons-094000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 14:29:23.680770    1865 addons.go:234] Setting addon cloud-spanner=true in "addons-094000"
	I0912 14:29:23.680789    1865 host.go:66] Checking if "addons-094000" exists ...
	I0912 14:29:23.680826    1865 retry.go:31] will retry after 1.476367645s: connect: dial unix /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/monitor: connect: connection refused
	I0912 14:29:23.680832    1865 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-094000"
	I0912 14:29:23.680844    1865 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-094000"
	I0912 14:29:23.680864    1865 host.go:66] Checking if "addons-094000" exists ...
	I0912 14:29:23.680904    1865 retry.go:31] will retry after 1.316333876s: connect: dial unix /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/monitor: connect: connection refused
	I0912 14:29:23.680910    1865 addons.go:69] Setting metrics-server=true in profile "addons-094000"
	I0912 14:29:23.680916    1865 addons.go:234] Setting addon metrics-server=true in "addons-094000"
	I0912 14:29:23.680922    1865 host.go:66] Checking if "addons-094000" exists ...
	I0912 14:29:23.680989    1865 retry.go:31] will retry after 534.593184ms: connect: dial unix /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/monitor: connect: connection refused
	I0912 14:29:23.680992    1865 addons.go:69] Setting ingress=true in profile "addons-094000"
	I0912 14:29:23.681013    1865 addons.go:69] Setting ingress-dns=true in profile "addons-094000"
	I0912 14:29:23.681021    1865 addons.go:234] Setting addon ingress-dns=true in "addons-094000"
	I0912 14:29:23.681028    1865 addons.go:234] Setting addon ingress=true in "addons-094000"
	I0912 14:29:23.681063    1865 retry.go:31] will retry after 569.222915ms: connect: dial unix /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/monitor: connect: connection refused
	I0912 14:29:23.680448    1865 addons.go:69] Setting volcano=true in profile "addons-094000"
	I0912 14:29:23.681076    1865 addons.go:234] Setting addon volcano=true in "addons-094000"
	I0912 14:29:23.681084    1865 host.go:66] Checking if "addons-094000" exists ...
	I0912 14:29:23.681090    1865 retry.go:31] will retry after 918.9339ms: connect: dial unix /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/monitor: connect: connection refused
	I0912 14:29:23.681095    1865 host.go:66] Checking if "addons-094000" exists ...
	I0912 14:29:23.681031    1865 host.go:66] Checking if "addons-094000" exists ...
	I0912 14:29:23.681112    1865 retry.go:31] will retry after 758.865566ms: connect: dial unix /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/monitor: connect: connection refused
	I0912 14:29:23.681230    1865 retry.go:31] will retry after 523.501094ms: connect: dial unix /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/monitor: connect: connection refused
	I0912 14:29:23.681276    1865 retry.go:31] will retry after 516.708131ms: connect: dial unix /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/monitor: connect: connection refused
	I0912 14:29:23.681343    1865 retry.go:31] will retry after 1.152800392s: connect: dial unix /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/monitor: connect: connection refused
	I0912 14:29:23.681349    1865 retry.go:31] will retry after 841.283324ms: connect: dial unix /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/monitor: connect: connection refused
	I0912 14:29:23.683550    1865 out.go:177] * Verifying Kubernetes components...
	I0912 14:29:23.683950    1865 host.go:66] Checking if "addons-094000" exists ...
	I0912 14:29:23.688698    1865 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 14:29:23.692606    1865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 14:29:23.696602    1865 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 14:29:23.696610    1865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 14:29:23.696619    1865 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/id_rsa Username:docker}
	I0912 14:29:23.739375    1865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0912 14:29:23.784075    1865 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 14:29:23.858930    1865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 14:29:23.921866    1865 start.go:971] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
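
The sed pipeline a few lines up injects a hosts{} stanza into the coredns ConfigMap so that host.minikube.internal resolves to the host machine (192.168.105.1). A minimal sketch of the same Corefile edit done in code rather than sed (the sample Corefile here is abbreviated):

// corefile_inject.go: insert a hosts{} stanza just above the forward plugin,
// as the replace pipeline in the log does.
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, ip string) string {
	hosts := "        hosts {\n           " + ip + " host.minikube.internal\n           fallthrough\n        }\n"
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			b.WriteString(hosts) // inject just above the forward plugin
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.105.1"))
}
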
	I0912 14:29:23.922336    1865 node_ready.go:35] waiting up to 6m0s for node "addons-094000" to be "Ready" ...
	I0912 14:29:23.923918    1865 node_ready.go:49] node "addons-094000" has status "Ready":"True"
	I0912 14:29:23.923926    1865 node_ready.go:38] duration metric: took 1.580458ms for node "addons-094000" to be "Ready" ...
	I0912 14:29:23.923930    1865 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 14:29:23.929292    1865 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jzjzs" in "kube-system" namespace to be "Ready" ...
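
Cluster verification starts here: wait up to 6m for the node's Ready condition, then for each system-critical pod. A minimal sketch of the node gate, shelling out to kubectl with a JSONPath query (minikube does the equivalent in-process against the API server):

// node_ready.go: poll the node's Ready condition until it reports "True".
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func nodeReady(name string) bool {
	out, err := exec.Command("kubectl", "get", "node", name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	return err == nil && strings.TrimSpace(string(out)) == "True"
}

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if nodeReady("addons-094000") {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node Ready")
}
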
	I0912 14:29:24.233103    1865 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0912 14:29:24.239881    1865 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0912 14:29:24.239885    1865 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0912 14:29:24.242980    1865 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0912 14:29:24.246246    1865 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-094000"
	I0912 14:29:24.246263    1865 host.go:66] Checking if "addons-094000" exists ...
	I0912 14:29:24.246997    1865 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0912 14:29:24.246998    1865 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0912 14:29:24.247000    1865 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0912 14:29:24.251208    1865 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0912 14:29:24.251218    1865 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/id_rsa Username:docker}
	I0912 14:29:24.250930    1865 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0912 14:29:24.255411    1865 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0912 14:29:24.255419    1865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0912 14:29:24.255427    1865 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/id_rsa Username:docker}
	I0912 14:29:24.262986    1865 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0912 14:29:24.266064    1865 out.go:177]   - Using image docker.io/busybox:stable
	I0912 14:29:24.266095    1865 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0912 14:29:24.269025    1865 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0912 14:29:24.269162    1865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0912 14:29:24.269179    1865 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/id_rsa Username:docker}
	I0912 14:29:24.273033    1865 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0912 14:29:24.273041    1865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0912 14:29:24.273047    1865 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/id_rsa Username:docker}
	I0912 14:29:24.281020    1865 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0912 14:29:24.287929    1865 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0912 14:29:24.294952    1865 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0912 14:29:24.300820    1865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0912 14:29:24.301841    1865 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0912 14:29:24.308182    1865 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0912 14:29:24.308193    1865 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0912 14:29:24.310029    1865 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0912 14:29:24.312987    1865 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0912 14:29:24.312996    1865 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0912 14:29:24.313007    1865 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/id_rsa Username:docker}
	I0912 14:29:24.321406    1865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0912 14:29:24.356717    1865 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0912 14:29:24.356731    1865 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0912 14:29:24.369224    1865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0912 14:29:24.429730    1865 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-094000" context rescaled to 1 replicas
	I0912 14:29:24.443975    1865 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0912 14:29:24.448046    1865 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0912 14:29:24.448056    1865 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0912 14:29:24.448068    1865 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/id_rsa Username:docker}
	I0912 14:29:24.448352    1865 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0912 14:29:24.448359    1865 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0912 14:29:24.453010    1865 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0912 14:29:24.457045    1865 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 14:29:24.457056    1865 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 14:29:24.457067    1865 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/id_rsa Username:docker}
	I0912 14:29:24.467993    1865 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0912 14:29:24.471984    1865 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0912 14:29:24.471992    1865 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0912 14:29:24.472002    1865 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/id_rsa Username:docker}
	I0912 14:29:24.472961    1865 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0912 14:29:24.472966    1865 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0912 14:29:24.495134    1865 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0912 14:29:24.495145    1865 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0912 14:29:24.529024    1865 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0912 14:29:24.533024    1865 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0912 14:29:24.533032    1865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0912 14:29:24.533042    1865 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/id_rsa Username:docker}
	I0912 14:29:24.540319    1865 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0912 14:29:24.540331    1865 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0912 14:29:24.550579    1865 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0912 14:29:24.550590    1865 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0912 14:29:24.603991    1865 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0912 14:29:24.607005    1865 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0912 14:29:24.607013    1865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0912 14:29:24.607024    1865 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/id_rsa Username:docker}
	I0912 14:29:24.607273    1865 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0912 14:29:24.607282    1865 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0912 14:29:24.609179    1865 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 14:29:24.609186    1865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0912 14:29:24.617076    1865 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0912 14:29:24.617091    1865 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0912 14:29:24.627724    1865 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 14:29:24.627733    1865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0912 14:29:24.645719    1865 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0912 14:29:24.645733    1865 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0912 14:29:24.648250    1865 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 14:29:24.648263    1865 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 14:29:24.655036    1865 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0912 14:29:24.655045    1865 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0912 14:29:24.655151    1865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 14:29:24.665819    1865 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0912 14:29:24.665834    1865 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0912 14:29:24.697049    1865 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 14:29:24.697062    1865 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 14:29:24.708082    1865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0912 14:29:24.712509    1865 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0912 14:29:24.712519    1865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0912 14:29:24.715836    1865 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0912 14:29:24.715842    1865 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0912 14:29:24.719039    1865 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0912 14:29:24.719047    1865 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0912 14:29:24.740862    1865 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0912 14:29:24.740878    1865 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0912 14:29:24.743723    1865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 14:29:24.747601    1865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0912 14:29:24.755684    1865 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0912 14:29:24.755695    1865 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0912 14:29:24.760869    1865 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0912 14:29:24.760880    1865 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0912 14:29:24.784395    1865 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0912 14:29:24.784405    1865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0912 14:29:24.788352    1865 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0912 14:29:24.788360    1865 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0912 14:29:24.788568    1865 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0912 14:29:24.788575    1865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0912 14:29:24.840788    1865 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0912 14:29:24.844888    1865 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 14:29:24.848863    1865 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 14:29:24.851878    1865 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0912 14:29:24.851886    1865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0912 14:29:24.851896    1865 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/id_rsa Username:docker}
	I0912 14:29:24.871946    1865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0912 14:29:24.872035    1865 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0912 14:29:24.872044    1865 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0912 14:29:24.874913    1865 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0912 14:29:24.874920    1865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0912 14:29:24.938339    1865 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0912 14:29:24.938349    1865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0912 14:29:25.003205    1865 out.go:177]   - Using image docker.io/registry:2.8.3
	I0912 14:29:25.010088    1865 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0912 14:29:25.014136    1865 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0912 14:29:25.014145    1865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0912 14:29:25.014155    1865 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/id_rsa Username:docker}
	I0912 14:29:25.014413    1865 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0912 14:29:25.014418    1865 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0912 14:29:25.036112    1865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0912 14:29:25.074187    1865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0912 14:29:25.104733    1865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0912 14:29:25.160150    1865 addons.go:234] Setting addon default-storageclass=true in "addons-094000"
	I0912 14:29:25.160173    1865 host.go:66] Checking if "addons-094000" exists ...
	I0912 14:29:25.160863    1865 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 14:29:25.160871    1865 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 14:29:25.160877    1865 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/id_rsa Username:docker}
	I0912 14:29:25.343426    1865 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0912 14:29:25.343438    1865 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0912 14:29:25.541771    1865 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0912 14:29:25.541782    1865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0912 14:29:25.597986    1865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 14:29:25.633548    1865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0912 14:29:25.932835    1865 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-jzjzs" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-jzjzs" not found
	I0912 14:29:25.932847    1865 pod_ready.go:82] duration metric: took 2.003601709s for pod "coredns-7c65d6cfc9-jzjzs" in "kube-system" namespace to be "Ready" ...
	E0912 14:29:25.932852    1865 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-jzjzs" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-jzjzs" not found
	I0912 14:29:25.932856    1865 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vj2r9" in "kube-system" namespace to be "Ready" ...
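The NotFound at 14:29:25.932 is expected churn rather than a failure: the coredns-7c65d6cfc9-jzjzs replica was deleted while the test was waiting on it (most likely minikube scaling the CoreDNS Deployment down to a single replica), so pod_ready logs the error, skips that pod, and moves on to the surviving replica. A minimal sketch of a wait loop with the same tolerance, written against client-go; waitPodReady is a hypothetical stand-in for pod_ready.go, not minikube's actual code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a named pod until its Ready condition is True.
// NotFound mid-wait means the pod was replaced or scaled away, so it
// is skipped rather than treated as fatal, as in the log above.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return true, nil // pod deleted: stop waiting for it (skipping!)
			}
			if err != nil {
				return false, nil // transient API error: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(cs, "kube-system", "coredns-7c65d6cfc9-vj2r9", 6*time.Minute))
}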
	I0912 14:29:28.006778    1865 pod_ready.go:103] pod "coredns-7c65d6cfc9-vj2r9" in "kube-system" namespace has status "Ready":"False"
	I0912 14:29:28.535135    1865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (4.234422125s)
	I0912 14:29:28.535148    1865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.213853875s)
	I0912 14:29:28.535182    1865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.166070292s)
	I0912 14:29:28.535224    1865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.880179291s)
	W0912 14:29:28.535238    1865 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0912 14:29:28.535254    1865 retry.go:31] will retry after 261.916588ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
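The failure above is a CRD ordering race, not a broken manifest: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates its CRD, and the API server's discovery information has not caught up by the time the custom resource is validated, hence "ensure CRDs are installed first". minikube's answer, visible in the retry.go:31 line, is simply to wait and re-run the whole apply. A minimal sketch of that retry-with-backoff pattern, assuming a hypothetical runApply callback in place of the ssh_runner invocation (the fractional delay in the log suggests the real backoff is jittered):

package main

import (
	"errors"
	"fmt"
	"time"
)

// applyWithRetry re-runs a flaky apply step with exponential backoff.
func applyWithRetry(runApply func() error, attempts int) error {
	backoff := 250 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = runApply(); err == nil {
			return nil
		}
		// CRD registration is eventually consistent: wait, then retry.
		fmt.Printf("apply failed, will retry after %v: %v\n", backoff, err)
		time.Sleep(backoff)
		backoff *= 2
	}
	return fmt.Errorf("apply failed after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	// Fake apply that fails once (CRD not yet registered), then succeeds.
	err := applyWithRetry(func() error {
		calls++
		if calls == 1 {
			return errors.New(`no matches for kind "VolumeSnapshotClass"`)
		}
		return nil
	}, 5)
	fmt.Println("result:", err)
}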
	I0912 14:29:28.535265    1865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.827283167s)
	I0912 14:29:28.535305    1865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.791681792s)
	I0912 14:29:28.535351    1865 addons.go:475] Verifying addon metrics-server=true in "addons-094000"
	I0912 14:29:28.535366    1865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.787865167s)
	I0912 14:29:28.535380    1865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.663531584s)
	I0912 14:29:28.535484    1865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.499460041s)
	I0912 14:29:28.540000    1865 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-094000 service yakd-dashboard -n yakd-dashboard
	
	I0912 14:29:28.797652    1865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 14:29:28.901973    1865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (3.797334333s)
	I0912 14:29:28.901993    1865 addons.go:475] Verifying addon ingress=true in "addons-094000"
	I0912 14:29:28.902000    1865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.304097875s)
	I0912 14:29:28.902092    1865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.827997584s)
	I0912 14:29:28.902104    1865 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-094000"
	I0912 14:29:28.902097    1865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.2686035s)
	I0912 14:29:28.902154    1865 addons.go:475] Verifying addon registry=true in "addons-094000"
	I0912 14:29:28.906083    1865 out.go:177] * Verifying ingress addon...
	I0912 14:29:28.914997    1865 out.go:177] * Verifying csi-hostpath-driver addon...
	I0912 14:29:28.922950    1865 out.go:177] * Verifying registry addon...
	I0912 14:29:28.929400    1865 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0912 14:29:28.932444    1865 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0912 14:29:28.935384    1865 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0912 14:29:28.939809    1865 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0912 14:29:28.940680    1865 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0912 14:29:28.940686    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:28.941081    1865 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0912 14:29:28.941087    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
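Companion to the waitPodReady sketch above (same imports and client setup): the kapi.go:75/96 waits key on a label selector rather than a pod name, and the "current state" they log is the pod Phase, which stays Pending until all of the pod's containers have been created. waitForSelector is a hypothetical stand-in for that loop:

// waitForSelector polls all pods matching a label selector until each
// one reports phase Running ("current state: Pending" until then).
func waitForSelector(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // nothing listed yet: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
}

// e.g. waitForSelector(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute)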
	I0912 14:29:28.948353    1865 pod_ready.go:93] pod "coredns-7c65d6cfc9-vj2r9" in "kube-system" namespace has status "Ready":"True"
	I0912 14:29:28.948363    1865 pod_ready.go:82] duration metric: took 3.015590958s for pod "coredns-7c65d6cfc9-vj2r9" in "kube-system" namespace to be "Ready" ...
	I0912 14:29:28.948370    1865 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-094000" in "kube-system" namespace to be "Ready" ...
	I0912 14:29:29.437504    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:29.453452    1865 pod_ready.go:93] pod "etcd-addons-094000" in "kube-system" namespace has status "Ready":"True"
	I0912 14:29:29.453463    1865 pod_ready.go:82] duration metric: took 505.105042ms for pod "etcd-addons-094000" in "kube-system" namespace to be "Ready" ...
	I0912 14:29:29.453469    1865 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-094000" in "kube-system" namespace to be "Ready" ...
	I0912 14:29:29.538218    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:29.934529    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:29.936954    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:30.434745    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:30.436697    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:30.694157    1865 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0912 14:29:30.694176    1865 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/id_rsa Username:docker}
	I0912 14:29:30.721345    1865 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0912 14:29:30.728101    1865 addons.go:234] Setting addon gcp-auth=true in "addons-094000"
	I0912 14:29:30.728127    1865 host.go:66] Checking if "addons-094000" exists ...
	I0912 14:29:30.728903    1865 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0912 14:29:30.728911    1865 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/addons-094000/id_rsa Username:docker}
	I0912 14:29:30.759430    1865 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 14:29:30.771024    1865 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0912 14:29:30.775342    1865 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0912 14:29:30.775348    1865 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0912 14:29:30.782319    1865 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0912 14:29:30.782326    1865 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0912 14:29:30.793662    1865 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0912 14:29:30.793669    1865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0912 14:29:30.804616    1865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0912 14:29:30.934485    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:30.936930    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:31.210901    1865 addons.go:475] Verifying addon gcp-auth=true in "addons-094000"
	I0912 14:29:31.264484    1865 out.go:177] * Verifying gcp-auth addon...
	I0912 14:29:31.273458    1865 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0912 14:29:31.274962    1865 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0912 14:29:31.434516    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:31.436828    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:31.457730    1865 pod_ready.go:103] pod "kube-apiserver-addons-094000" in "kube-system" namespace has status "Ready":"False"
	I0912 14:29:31.934382    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:31.936678    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:32.434694    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:32.436793    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:32.934878    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:32.936996    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:33.434367    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:33.436822    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:33.934338    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:33.936657    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:33.957564    1865 pod_ready.go:103] pod "kube-apiserver-addons-094000" in "kube-system" namespace has status "Ready":"False"
	I0912 14:29:34.458095    1865 pod_ready.go:93] pod "kube-apiserver-addons-094000" in "kube-system" namespace has status "Ready":"True"
	I0912 14:29:34.458109    1865 pod_ready.go:82] duration metric: took 5.004782s for pod "kube-apiserver-addons-094000" in "kube-system" namespace to be "Ready" ...
	I0912 14:29:34.458114    1865 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-094000" in "kube-system" namespace to be "Ready" ...
	I0912 14:29:34.460154    1865 pod_ready.go:93] pod "kube-controller-manager-addons-094000" in "kube-system" namespace has status "Ready":"True"
	I0912 14:29:34.460163    1865 pod_ready.go:82] duration metric: took 2.045625ms for pod "kube-controller-manager-addons-094000" in "kube-system" namespace to be "Ready" ...
	I0912 14:29:34.460167    1865 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vv56v" in "kube-system" namespace to be "Ready" ...
	I0912 14:29:34.462060    1865 pod_ready.go:93] pod "kube-proxy-vv56v" in "kube-system" namespace has status "Ready":"True"
	I0912 14:29:34.462069    1865 pod_ready.go:82] duration metric: took 1.898334ms for pod "kube-proxy-vv56v" in "kube-system" namespace to be "Ready" ...
	I0912 14:29:34.462073    1865 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-094000" in "kube-system" namespace to be "Ready" ...
	I0912 14:29:34.466509    1865 pod_ready.go:93] pod "kube-scheduler-addons-094000" in "kube-system" namespace has status "Ready":"True"
	I0912 14:29:34.466520    1865 pod_ready.go:82] duration metric: took 4.442875ms for pod "kube-scheduler-addons-094000" in "kube-system" namespace to be "Ready" ...
	I0912 14:29:34.466524    1865 pod_ready.go:39] duration metric: took 10.542896625s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 14:29:34.466538    1865 api_server.go:52] waiting for apiserver process to appear ...
	I0912 14:29:34.466596    1865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 14:29:34.473321    1865 api_server.go:72] duration metric: took 10.793392458s to wait for apiserver process to appear ...
	I0912 14:29:34.473332    1865 api_server.go:88] waiting for apiserver healthz status ...
	I0912 14:29:34.473340    1865 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0912 14:29:34.477257    1865 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0912 14:29:34.478065    1865 api_server.go:141] control plane version: v1.31.1
	I0912 14:29:34.478073    1865 api_server.go:131] duration metric: took 4.739083ms to wait for apiserver health ...
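The health gate at api_server.go:253 is a plain HTTPS GET against the apiserver's /healthz endpoint, with HTTP 200 and a body of "ok" taken as healthy. A minimal sketch under two stated assumptions: certificate verification is skipped for brevity (a real check should trust the cluster CA from the kubeconfig), and /healthz is reachable without client credentials, which depends on the apiserver's anonymous-auth settings:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: skip CA verification; production code should
		// load the cluster CA (and client certs) from the kubeconfig.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	fmt.Println(checkHealthz("https://192.168.105.2:8443/healthz"))
}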
	I0912 14:29:34.478076    1865 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 14:29:34.533722    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:34.533939    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:34.536048    1865 system_pods.go:59] 17 kube-system pods found
	I0912 14:29:34.536054    1865 system_pods.go:61] "coredns-7c65d6cfc9-vj2r9" [98cfa77f-1c88-46db-967f-43edc1c3cd7a] Running
	I0912 14:29:34.536058    1865 system_pods.go:61] "csi-hostpath-attacher-0" [1bacd096-0ada-4927-8e97-f7f4305edf48] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0912 14:29:34.536064    1865 system_pods.go:61] "csi-hostpath-resizer-0" [57da50d3-51f7-4e26-a1b4-4f8a47885f44] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0912 14:29:34.536067    1865 system_pods.go:61] "csi-hostpathplugin-kc89b" [f6f7cdb2-0de4-418e-86a1-61f40d539e10] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0912 14:29:34.536069    1865 system_pods.go:61] "etcd-addons-094000" [c92eaa5d-2632-4961-b320-bef1f8b4161f] Running
	I0912 14:29:34.536072    1865 system_pods.go:61] "kube-apiserver-addons-094000" [0c1ba576-d537-4e5e-bd09-3f1656fe3e8a] Running
	I0912 14:29:34.536074    1865 system_pods.go:61] "kube-controller-manager-addons-094000" [8469aa0e-4d69-46b6-99f9-1a0956dceb76] Running
	I0912 14:29:34.536076    1865 system_pods.go:61] "kube-ingress-dns-minikube" [8277b4a6-0537-4126-9826-db38c49c189c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0912 14:29:34.536078    1865 system_pods.go:61] "kube-proxy-vv56v" [42e0d765-f927-42ca-98ad-c355fd7dc961] Running
	I0912 14:29:34.536080    1865 system_pods.go:61] "kube-scheduler-addons-094000" [96fd4a75-3e95-47b2-865a-ab33d615831f] Running
	I0912 14:29:34.536083    1865 system_pods.go:61] "metrics-server-84c5f94fbc-kwgtm" [15370a23-e77e-4961-8c9d-79e7c4de4ce9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 14:29:34.536085    1865 system_pods.go:61] "nvidia-device-plugin-daemonset-ccfkk" [0f414de0-66c3-4a20-adf0-75eebe79b9d9] Running
	I0912 14:29:34.536087    1865 system_pods.go:61] "registry-66c9cd494c-p5f26" [d42d83d3-de78-4a99-ab0d-4539040c1a33] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0912 14:29:34.536091    1865 system_pods.go:61] "registry-proxy-zh429" [21dafca9-6625-404e-9961-8c638a8f1694] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0912 14:29:34.536095    1865 system_pods.go:61] "snapshot-controller-56fcc65765-7dnm2" [82a2f59c-61f9-46b7-94bd-8db3908adefd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 14:29:34.536098    1865 system_pods.go:61] "snapshot-controller-56fcc65765-sqxrf" [075a3de8-2bec-44fd-918f-2a67ad5c2a21] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 14:29:34.536100    1865 system_pods.go:61] "storage-provisioner" [ab0d9c24-6010-4db8-a714-76425506a864] Running
	I0912 14:29:34.536103    1865 system_pods.go:74] duration metric: took 58.0255ms to wait for pod list to return data ...
	I0912 14:29:34.536107    1865 default_sa.go:34] waiting for default service account to be created ...
	I0912 14:29:34.537162    1865 default_sa.go:45] found service account: "default"
	I0912 14:29:34.537167    1865 default_sa.go:55] duration metric: took 1.056958ms for default service account to be created ...
	I0912 14:29:34.537170    1865 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 14:29:34.663188    1865 system_pods.go:86] 17 kube-system pods found
	I0912 14:29:34.663199    1865 system_pods.go:89] "coredns-7c65d6cfc9-vj2r9" [98cfa77f-1c88-46db-967f-43edc1c3cd7a] Running
	I0912 14:29:34.663205    1865 system_pods.go:89] "csi-hostpath-attacher-0" [1bacd096-0ada-4927-8e97-f7f4305edf48] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0912 14:29:34.663209    1865 system_pods.go:89] "csi-hostpath-resizer-0" [57da50d3-51f7-4e26-a1b4-4f8a47885f44] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0912 14:29:34.663213    1865 system_pods.go:89] "csi-hostpathplugin-kc89b" [f6f7cdb2-0de4-418e-86a1-61f40d539e10] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0912 14:29:34.663216    1865 system_pods.go:89] "etcd-addons-094000" [c92eaa5d-2632-4961-b320-bef1f8b4161f] Running
	I0912 14:29:34.663219    1865 system_pods.go:89] "kube-apiserver-addons-094000" [0c1ba576-d537-4e5e-bd09-3f1656fe3e8a] Running
	I0912 14:29:34.663222    1865 system_pods.go:89] "kube-controller-manager-addons-094000" [8469aa0e-4d69-46b6-99f9-1a0956dceb76] Running
	I0912 14:29:34.663227    1865 system_pods.go:89] "kube-ingress-dns-minikube" [8277b4a6-0537-4126-9826-db38c49c189c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0912 14:29:34.663229    1865 system_pods.go:89] "kube-proxy-vv56v" [42e0d765-f927-42ca-98ad-c355fd7dc961] Running
	I0912 14:29:34.663232    1865 system_pods.go:89] "kube-scheduler-addons-094000" [96fd4a75-3e95-47b2-865a-ab33d615831f] Running
	I0912 14:29:34.663236    1865 system_pods.go:89] "metrics-server-84c5f94fbc-kwgtm" [15370a23-e77e-4961-8c9d-79e7c4de4ce9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 14:29:34.663239    1865 system_pods.go:89] "nvidia-device-plugin-daemonset-ccfkk" [0f414de0-66c3-4a20-adf0-75eebe79b9d9] Running
	I0912 14:29:34.663243    1865 system_pods.go:89] "registry-66c9cd494c-p5f26" [d42d83d3-de78-4a99-ab0d-4539040c1a33] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0912 14:29:34.663246    1865 system_pods.go:89] "registry-proxy-zh429" [21dafca9-6625-404e-9961-8c638a8f1694] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0912 14:29:34.663255    1865 system_pods.go:89] "snapshot-controller-56fcc65765-7dnm2" [82a2f59c-61f9-46b7-94bd-8db3908adefd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 14:29:34.663260    1865 system_pods.go:89] "snapshot-controller-56fcc65765-sqxrf" [075a3de8-2bec-44fd-918f-2a67ad5c2a21] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 14:29:34.663262    1865 system_pods.go:89] "storage-provisioner" [ab0d9c24-6010-4db8-a714-76425506a864] Running
	I0912 14:29:34.663267    1865 system_pods.go:126] duration metric: took 126.09775ms to wait for k8s-apps to be running ...
	I0912 14:29:34.663272    1865 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 14:29:34.663350    1865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 14:29:34.670548    1865 system_svc.go:56] duration metric: took 7.273291ms WaitForService to wait for kubelet
	I0912 14:29:34.670559    1865 kubeadm.go:582] duration metric: took 10.990637166s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 14:29:34.670569    1865 node_conditions.go:102] verifying NodePressure condition ...
	I0912 14:29:34.859455    1865 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 14:29:34.859475    1865 node_conditions.go:123] node cpu capacity is 2
	I0912 14:29:34.859483    1865 node_conditions.go:105] duration metric: took 188.916584ms to run NodePressure ...
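The NodePressure pass above reads each node's reported capacity (node_conditions.go:122/123: 17734596Ki of ephemeral storage, 2 CPUs) before startup is allowed to proceed. A companion sketch in the same client-go style as the earlier waits, with checkNodeCapacity as a hypothetical helper:

// checkNodeCapacity lists all nodes and prints the capacity fields the
// test verifies: CPU count and ephemeral storage.
func checkNodeCapacity(cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
	return nil
}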
	I0912 14:29:34.859491    1865 start.go:241] waiting for startup goroutines ...
	I0912 14:29:34.934851    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:34.938176    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:35.435075    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:35.436667    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:35.935262    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:35.936862    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:36.434192    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:36.436573    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:36.934066    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:36.936623    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:37.434361    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:37.436939    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:37.934042    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:37.936851    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:38.434234    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:38.436641    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:38.934366    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:38.937686    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:39.434324    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:39.436753    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:39.934423    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:39.936697    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:40.434168    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:40.437580    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:40.936617    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:40.937545    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:41.434222    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:41.436369    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:41.934527    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:41.936411    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:42.434994    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:42.436936    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:42.934074    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:42.936511    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:43.433936    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:43.436488    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:43.937329    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:43.942933    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:44.434159    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:44.436567    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:44.935170    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:44.936948    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:45.434070    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:45.436536    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:45.934112    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:45.936462    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:46.433944    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:46.436502    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:46.933979    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:46.936653    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:47.433707    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:47.436730    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:47.934205    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:47.936687    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:48.434076    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:48.436215    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:48.933940    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:48.936344    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:49.574881    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:49.575033    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:49.933992    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:49.936497    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:50.437575    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:50.438205    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:50.933828    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:50.936303    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:51.434706    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:51.436590    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:51.933913    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:51.935932    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:52.433783    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:52.436320    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:53.147067    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:53.147327    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:53.435501    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:53.436937    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:53.935850    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:53.937008    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:54.433625    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:54.436041    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:54.934674    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:54.936579    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:55.433792    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:55.436034    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:55.933475    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:55.936349    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:56.434371    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:56.436154    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:56.933831    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:56.936053    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:57.434772    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:57.436517    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:57.934802    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:57.936650    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:58.433922    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:58.436388    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:58.933738    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:58.936142    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:59.433509    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:59.436166    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:29:59.933561    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:29:59.936076    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:30:00.433798    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:00.436117    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:30:00.933543    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:00.935942    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:30:01.433849    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:01.435883    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 14:30:01.936031    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:01.937010    1865 kapi.go:107] duration metric: took 33.0025875s to wait for kubernetes.io/minikube-addons=registry ...
	I0912 14:30:02.435530    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:02.934231    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:03.433509    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:03.933483    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:04.433468    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:04.934833    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:05.434261    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:05.933494    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:06.433091    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:06.933146    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:07.433266    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:07.933421    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:08.434602    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:08.933685    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:09.432930    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:09.933380    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:10.435519    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:10.933364    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:11.433184    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:11.933395    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:12.433124    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:12.933859    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:13.438824    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:13.933340    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:14.433540    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:14.933204    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:15.434770    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:15.933026    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:16.433183    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:16.933051    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:17.433149    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:17.933116    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:18.433453    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:18.932789    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:19.433256    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:19.934719    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:20.432931    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:20.932859    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:21.432915    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:22.003515    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:22.435174    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:22.936253    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:23.432959    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:23.932818    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:24.433170    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:24.938247    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 14:30:25.443715    1865 kapi.go:107] duration metric: took 56.512890583s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0912 14:30:50.931690    1865 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0912 14:30:50.931705    1865 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 14:30:51.431461    1865 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 14:30:51.932734    1865 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 14:30:52.432983    1865 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 14:30:52.930975    1865 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 14:30:53.429515    1865 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 14:30:53.775109    1865 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0912 14:30:53.775119    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... 152 interleaved ingress-nginx and gcp-auth polls (both Pending, ~0.5s apart) elided, 14:30:53 through 14:31:31 ...]
	I0912 14:31:31.930413    1865 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0912 14:31:31.930422    1865 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... 123 more interleaved ingress-nginx and gcp-auth polls (both still Pending) elided, 14:31:32 through 14:32:02 ...]
	I0912 14:32:02.929572    1865 kapi.go:107] duration metric: took 2m34.004659458s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0912 14:32:03.273058    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 14:32:03.772889    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 14:32:04.273128    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 14:32:04.772729    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 14:32:05.272728    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 14:32:05.772785    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 14:32:06.273305    1865 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 14:32:06.772747    1865 kapi.go:107] duration metric: took 2m35.503821666s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0912 14:32:06.777482    1865 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-094000 cluster.
	I0912 14:32:06.783391    1865 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0912 14:32:06.789422    1865 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0912 14:32:06.794351    1865 out.go:177] * Enabled addons: storage-provisioner, volcano, nvidia-device-plugin, ingress-dns, metrics-server, cloud-spanner, inspektor-gadget, yakd, storage-provisioner-rancher, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0912 14:32:06.798364    1865 addons.go:510] duration metric: took 2m43.12286225s for enable addons: enabled=[storage-provisioner volcano nvidia-device-plugin ingress-dns metrics-server cloud-spanner inspektor-gadget yakd storage-provisioner-rancher default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0912 14:32:06.798378    1865 start.go:246] waiting for cluster config update ...
	I0912 14:32:06.798386    1865 start.go:255] writing updated cluster config ...
	I0912 14:32:06.798815    1865 ssh_runner.go:195] Run: rm -f paused
	I0912 14:32:06.947505    1865 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I0912 14:32:06.954427    1865 out.go:201] 
	W0912 14:32:06.958456    1865 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I0912 14:32:06.961390    1865 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I0912 14:32:06.969247    1865 out.go:177] * Done! kubectl is now configured to use "addons-094000" cluster and "default" namespace by default
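
The start log ends with every addon reporting healthy, plus two actionable notes: the gcp-auth opt-out label and the kubectl version skew. A minimal shell sketch of acting on each, assuming the default context now points at "addons-094000" (the no-gcp-creds pod below is a hypothetical example):

  # Re-check the label selectors the waits above were polling:
  kubectl get pods -A -l kubernetes.io/minikube-addons=csi-hostpath-driver
  kubectl get pods -A -l app.kubernetes.io/name=ingress-nginx
  kubectl get pods -A -l kubernetes.io/minikube-addons=gcp-auth

  # Opt a new pod out of credential mounting via the label key named in
  # the log output above:
  kubectl run no-gcp-creds --image=busybox --restart=Never \
    --labels=gcp-auth-skip-secret=true -- sleep 3600

  # Remount credentials into pods created before the addon was enabled:
  minikube addons enable gcp-auth --refresh

  # Use the version-matched kubectl bundled with minikube, as suggested:
  minikube kubectl -- get pods -A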
	
	
	==> Docker <==
	Sep 12 21:41:56 addons-094000 dockerd[1289]: time="2024-09-12T21:41:56.803202947Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 12 21:41:58 addons-094000 dockerd[1282]: time="2024-09-12T21:41:58.866794089Z" level=info msg="ignoring event" container=9eda0bbc3d9a26910b1ba74c58ee34a54a1ed25461109155b9f3a8f24a9d3c03 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:41:58 addons-094000 dockerd[1289]: time="2024-09-12T21:41:58.867130557Z" level=info msg="shim disconnected" id=9eda0bbc3d9a26910b1ba74c58ee34a54a1ed25461109155b9f3a8f24a9d3c03 namespace=moby
	Sep 12 21:41:58 addons-094000 dockerd[1289]: time="2024-09-12T21:41:58.867164279Z" level=warning msg="cleaning up after shim disconnected" id=9eda0bbc3d9a26910b1ba74c58ee34a54a1ed25461109155b9f3a8f24a9d3c03 namespace=moby
	Sep 12 21:41:58 addons-094000 dockerd[1289]: time="2024-09-12T21:41:58.867169072Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 12 21:41:58 addons-094000 dockerd[1289]: time="2024-09-12T21:41:58.883288331Z" level=info msg="shim disconnected" id=0de37e2aa27a6cd86bbe8527ca713f09366dacdab7d5d62119b0f4da39eb0b3e namespace=moby
	Sep 12 21:41:58 addons-094000 dockerd[1289]: time="2024-09-12T21:41:58.883375116Z" level=warning msg="cleaning up after shim disconnected" id=0de37e2aa27a6cd86bbe8527ca713f09366dacdab7d5d62119b0f4da39eb0b3e namespace=moby
	Sep 12 21:41:58 addons-094000 dockerd[1289]: time="2024-09-12T21:41:58.883380493Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 12 21:41:58 addons-094000 dockerd[1282]: time="2024-09-12T21:41:58.883602791Z" level=info msg="ignoring event" container=0de37e2aa27a6cd86bbe8527ca713f09366dacdab7d5d62119b0f4da39eb0b3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:41:59 addons-094000 dockerd[1282]: time="2024-09-12T21:41:59.007013452Z" level=info msg="ignoring event" container=362f66c2bb1f9aa92147d4697719ad7c524b75f14cefc3fa5b0599b0bcbff7e9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:41:59 addons-094000 dockerd[1289]: time="2024-09-12T21:41:59.007157301Z" level=info msg="shim disconnected" id=362f66c2bb1f9aa92147d4697719ad7c524b75f14cefc3fa5b0599b0bcbff7e9 namespace=moby
	Sep 12 21:41:59 addons-094000 dockerd[1289]: time="2024-09-12T21:41:59.007186729Z" level=warning msg="cleaning up after shim disconnected" id=362f66c2bb1f9aa92147d4697719ad7c524b75f14cefc3fa5b0599b0bcbff7e9 namespace=moby
	Sep 12 21:41:59 addons-094000 dockerd[1289]: time="2024-09-12T21:41:59.007190647Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 12 21:41:59 addons-094000 dockerd[1282]: time="2024-09-12T21:41:59.053099019Z" level=info msg="ignoring event" container=1e6d25b95e114794617ce05debe2ba72aa9b7951adecab064f89ad0533fc2b7d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:41:59 addons-094000 dockerd[1289]: time="2024-09-12T21:41:59.053689754Z" level=info msg="shim disconnected" id=1e6d25b95e114794617ce05debe2ba72aa9b7951adecab064f89ad0533fc2b7d namespace=moby
	Sep 12 21:41:59 addons-094000 dockerd[1289]: time="2024-09-12T21:41:59.053723309Z" level=warning msg="cleaning up after shim disconnected" id=1e6d25b95e114794617ce05debe2ba72aa9b7951adecab064f89ad0533fc2b7d namespace=moby
	Sep 12 21:41:59 addons-094000 dockerd[1289]: time="2024-09-12T21:41:59.053727602Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 12 21:41:59 addons-094000 dockerd[1282]: time="2024-09-12T21:41:59.134552978Z" level=info msg="ignoring event" container=46cd3f7bc3bc0a00acb08546871702d180d9020bed46309fadd5d224e1bd0032 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:41:59 addons-094000 dockerd[1289]: time="2024-09-12T21:41:59.134632634Z" level=info msg="shim disconnected" id=46cd3f7bc3bc0a00acb08546871702d180d9020bed46309fadd5d224e1bd0032 namespace=moby
	Sep 12 21:41:59 addons-094000 dockerd[1289]: time="2024-09-12T21:41:59.134678945Z" level=warning msg="cleaning up after shim disconnected" id=46cd3f7bc3bc0a00acb08546871702d180d9020bed46309fadd5d224e1bd0032 namespace=moby
	Sep 12 21:41:59 addons-094000 dockerd[1289]: time="2024-09-12T21:41:59.134684113Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 12 21:41:59 addons-094000 dockerd[1282]: time="2024-09-12T21:41:59.162262366Z" level=info msg="ignoring event" container=5c030c2418ad5a3a3934d2134f53828faf1ce5066de13292d9f6439a7751840e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:41:59 addons-094000 dockerd[1289]: time="2024-09-12T21:41:59.162394002Z" level=info msg="shim disconnected" id=5c030c2418ad5a3a3934d2134f53828faf1ce5066de13292d9f6439a7751840e namespace=moby
	Sep 12 21:41:59 addons-094000 dockerd[1289]: time="2024-09-12T21:41:59.162426640Z" level=warning msg="cleaning up after shim disconnected" id=5c030c2418ad5a3a3934d2134f53828faf1ce5066de13292d9f6439a7751840e namespace=moby
	Sep 12 21:41:59 addons-094000 dockerd[1289]: time="2024-09-12T21:41:59.162430850Z" level=info msg="cleaning up dead shim" namespace=moby
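
The "shim disconnected" / "cleaning up dead shim" pairs above are containerd's normal teardown as containers exit; the IDs here (0de37e2aa27a6..., 362f66c2bb1f9..., 1e6d25b95e114...) reappear as Exited entries in the container table below. A sketch of pulling the same view from a live cluster, assuming the Docker daemon on the node runs as the systemd unit "docker":

  # Full bundle with the same sections as this report:
  minikube logs --file=logs.txt
  # Only the Docker daemon journal, from inside the node:
  minikube ssh -- sudo journalctl -u docker --no-pager -n 50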
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f07f6b608e189       busybox@sha256:34b191d63fbc93e25e275bfccf1b5365664e5ac28f06d974e8d50090fbb49f41                                              3 seconds ago       Exited              busybox                   0                   0de37e2aa27a6       test-local-path
	a1709138dedd7       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                              8 seconds ago       Exited              helper-pod                0                   66b6373b9e929       helper-pod-create-pvc-ccf8019f-9494-470c-adb1-2ca3643c7b43
	422c468244be1       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec            57 seconds ago      Exited              gadget                    7                   93ef49f3b6b23       gadget-9wp6w
	5494ecbe6cb1e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                  0                   aa7da7822a262       gcp-auth-89d5ffd79-qhxwk
	64b8bb357bba9       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             9 minutes ago       Running             controller                0                   b900761774853       ingress-nginx-controller-bc57996ff-9fp2z
	1637715b43b3f       420193b27261a                                                                                                                10 minutes ago      Exited              patch                     1                   f4a010755f75a       ingress-nginx-admission-patch-trc86
	e2b59cbcfe7c8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   10 minutes ago      Exited              create                    0                   8f3484aab5fe4       ingress-nginx-admission-create-gg68k
	1e6d25b95e114       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              11 minutes ago      Exited              registry-proxy            0                   5c030c2418ad5       registry-proxy-zh429
	362f66c2bb1f9       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                             12 minutes ago      Exited              registry                  0                   46cd3f7bc3bc0       registry-66c9cd494c-p5f26
	f38c428cb886b       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9        12 minutes ago      Running             metrics-server            0                   36dbc5017a5c5       metrics-server-84c5f94fbc-kwgtm
	98950f9de9f1b       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner    0                   85dbda8f3725c       local-path-provisioner-86d989889c-z5lqx
	2a2461e0db7f8       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago      Running             minikube-ingress-dns      0                   95a1fc0490c4e       kube-ingress-dns-minikube
	037c89f1d1a99       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago      Running             cloud-spanner-emulator    0                   546f0df91dac6       cloud-spanner-emulator-769b77f747-jmz4x
	01a23a6d9162c       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner       0                   6f7b4fb1d7cec       storage-provisioner
	cd8e82f41fc6c       2f6c962e7b831                                                                                                                12 minutes ago      Running             coredns                   0                   7c6c5f2e6046a       coredns-7c65d6cfc9-vj2r9
	3ed42e46d3da7       24a140c548c07                                                                                                                12 minutes ago      Running             kube-proxy                0                   da4c5b29816f9       kube-proxy-vv56v
	9ce52093e2b23       d3f53a98c0a9d                                                                                                                12 minutes ago      Running             kube-apiserver            0                   be33963653595       kube-apiserver-addons-094000
	44fe2d9010037       7f8aa378bb47d                                                                                                                12 minutes ago      Running             kube-scheduler            0                   4a992641939c7       kube-scheduler-addons-094000
	2240bd8077091       279f381cb3736                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   e41c1291d43be       kube-controller-manager-addons-094000
	3d70acdb463a5       27e3830e14027                                                                                                                12 minutes ago      Running             etcd                      0                   4872ca30988cf       etcd-addons-094000
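
The table above uses crictl's layout, and two rows stand out: the short-lived test containers (busybox, helper-pod) that just exited, and gadget-9wp6w at ATTEMPT 7, i.e. a container that has been restarted repeatedly. A sketch of digging further, assuming crictl is available on the node and that inspektor-gadget runs in the "gadget" namespace (both are assumptions, not shown in the report):

  # Same listing, taken directly on the node:
  minikube ssh -- sudo crictl ps -a
  # Logs from the previous (crashed) run of the gadget container:
  kubectl logs -n gadget gadget-9wp6w --previous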
	
	
	==> controller_ingress [64b8bb357bba] <==
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	W0912 21:32:02.523587       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0912 21:32:02.523699       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0912 21:32:02.527476       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
	I0912 21:32:02.610216       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0912 21:32:02.626076       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0912 21:32:02.633000       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0912 21:32:02.642835       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"e64fefb8-d66d-4e65-a5ea-e652ebc27a97", APIVersion:"v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0912 21:32:02.649877       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"c45dd024-deba-4268-871e-8eec335d1804", APIVersion:"v1", ResourceVersion:"703", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0912 21:32:02.649899       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"1f003956-1b94-46b3-9e04-378ae6d68bfc", APIVersion:"v1", ResourceVersion:"710", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0912 21:32:03.835246       7 nginx.go:317] "Starting NGINX process"
	I0912 21:32:03.835401       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0912 21:32:03.835415       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0912 21:32:03.835583       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0912 21:32:03.847217       7 controller.go:213] "Backend successfully reloaded"
	I0912 21:32:03.847311       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0912 21:32:03.847509       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-9fp2z", UID:"a7d2b926-806c-4cff-bf63-3475ac219413", APIVersion:"v1", ResourceVersion:"1228", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0912 21:32:03.851287       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0912 21:32:03.851465       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-9fp2z"
	I0912 21:32:03.913866       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-9fp2z" node="addons-094000"
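
The controller log shows a clean start: API client against 10.96.0.1:443, fake default certificate generated, leader lease acquired, one backend reload; the closing "POD is not ready" line is only the status updater running before the readiness probe has passed. To fetch these logs again (the deployment name is inferred from the pod name, so treat it as an assumption):

  kubectl logs -n ingress-nginx ingress-nginx-controller-bc57996ff-9fp2z
  # Or via the owning deployment, following new lines:
  kubectl logs -n ingress-nginx deploy/ingress-nginx-controller -f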
	
	
	==> coredns [cd8e82f41fc6] <==
	[INFO] 127.0.0.1:58733 - 22425 "HINFO IN 8145035439525199814.4139738628472736569. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.0260525s
	[INFO] 10.244.0.9:48701 - 18344 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000104254s
	[INFO] 10.244.0.9:48701 - 25262 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000027229s
	[INFO] 10.244.0.9:39897 - 43721 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00005729s
	[INFO] 10.244.0.9:39897 - 58063 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000026314s
	[INFO] 10.244.0.9:53529 - 32848 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000033724s
	[INFO] 10.244.0.9:53529 - 15439 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000044133s
	[INFO] 10.244.0.9:34190 - 53281 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000031851s
	[INFO] 10.244.0.9:34190 - 33570 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000036723s
	[INFO] 10.244.0.9:38980 - 51767 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000044008s
	[INFO] 10.244.0.9:38980 - 60725 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000025397s
	[INFO] 10.244.0.9:32933 - 1307 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000011741s
	[INFO] 10.244.0.9:32933 - 3866 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000027563s
	[INFO] 10.244.0.9:59615 - 47234 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00001707s
	[INFO] 10.244.0.9:59615 - 45187 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000011117s
	[INFO] 10.244.0.9:51719 - 44874 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000011991s
	[INFO] 10.244.0.9:51719 - 64069 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000026688s
	[INFO] 10.244.0.24:40516 - 20267 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.006302104s
	[INFO] 10.244.0.24:60441 - 11197 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.006345214s
	[INFO] 10.244.0.24:50491 - 27632 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000753224s
	[INFO] 10.244.0.24:55870 - 63493 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000793084s
	[INFO] 10.244.0.24:48098 - 47692 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000039819s
	[INFO] 10.244.0.24:47311 - 33624 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000094798s
	[INFO] 10.244.0.24:39572 - 39697 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.004893246s
	[INFO] 10.244.0.24:44489 - 2492 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.005013786s
	
	
	==> describe nodes <==
	Name:               addons-094000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-094000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=addons-094000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_12T14_29_19_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-094000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 21:29:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-094000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 21:41:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 21:37:58 +0000   Thu, 12 Sep 2024 21:29:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 21:37:58 +0000   Thu, 12 Sep 2024 21:29:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 21:37:58 +0000   Thu, 12 Sep 2024 21:29:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 21:37:58 +0000   Thu, 12 Sep 2024 21:29:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-094000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904752Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904752Ki
	  pods:               110
	System Info:
	  Machine ID:                 9aa957a966124f9bb4e0b774a47d8ca1
	  System UUID:                9aa957a966124f9bb4e0b774a47d8ca1
	  Boot ID:                    80b2124d-68fb-4e81-ae86-fa217dadf068
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  default                     cloud-spanner-emulator-769b77f747-jmz4x     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     registry-test                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  gadget                      gadget-9wp6w                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-qhxwk                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-9fp2z    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-vj2r9                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-094000                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-094000                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-094000       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-vv56v                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-094000                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-kwgtm             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-z5lqx     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node addons-094000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node addons-094000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet          Node addons-094000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m   kubelet          Node addons-094000 status is now: NodeReady
	  Normal  RegisteredNode           12m   node-controller  Node addons-094000 event: Registered Node addons-094000 in Controller
	
	
	==> dmesg <==
	[ +15.142222] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.192870] kauditd_printk_skb: 9 callbacks suppressed
	[Sep12 21:30] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.778025] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.077742] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.899882] kauditd_printk_skb: 26 callbacks suppressed
	[Sep12 21:31] kauditd_printk_skb: 21 callbacks suppressed
	[ +27.085751] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.211873] kauditd_printk_skb: 68 callbacks suppressed
	[  +8.649018] kauditd_printk_skb: 18 callbacks suppressed
	[  +9.006137] kauditd_printk_skb: 2 callbacks suppressed
	[Sep12 21:32] kauditd_printk_skb: 22 callbacks suppressed
	[ +12.355044] kauditd_printk_skb: 6 callbacks suppressed
	[ +22.681304] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.289149] kauditd_printk_skb: 20 callbacks suppressed
	[Sep12 21:33] kauditd_printk_skb: 2 callbacks suppressed
	[Sep12 21:35] kauditd_printk_skb: 10 callbacks suppressed
	[Sep12 21:40] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.915245] kauditd_printk_skb: 7 callbacks suppressed
	[Sep12 21:41] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.759152] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.625921] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.375337] kauditd_printk_skb: 6 callbacks suppressed
	[ +11.369851] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.446530] kauditd_printk_skb: 23 callbacks suppressed
	
	
	==> etcd [3d70acdb463a] <==
	{"level":"info","ts":"2024-09-12T21:29:15.038955Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"warn","ts":"2024-09-12T21:29:34.575191Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.241451ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:29:34.575255Z","caller":"traceutil/trace.go:171","msg":"trace[306247226] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:925; }","duration":"151.324617ms","start":"2024-09-12T21:29:34.423923Z","end":"2024-09-12T21:29:34.575248Z","steps":["trace[306247226] 'range keys from in-memory index tree'  (duration: 151.212903ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:29:34.575192Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.312576ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/gcp-auth/gcp-auth-certs-create.17f49c7226e2c032\" ","response":"range_response_count:1 size:917"}
	{"level":"info","ts":"2024-09-12T21:29:34.575456Z","caller":"traceutil/trace.go:171","msg":"trace[974931706] range","detail":"{range_begin:/registry/events/gcp-auth/gcp-auth-certs-create.17f49c7226e2c032; range_end:; response_count:1; response_revision:925; }","duration":"184.599039ms","start":"2024-09-12T21:29:34.390853Z","end":"2024-09-12T21:29:34.575452Z","steps":["trace[974931706] 'range keys from in-memory index tree'  (duration: 184.255688ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:29:49.649846Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.696825ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:29:49.649912Z","caller":"traceutil/trace.go:171","msg":"trace[884296111] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:988; }","duration":"139.750479ms","start":"2024-09-12T21:29:49.510134Z","end":"2024-09-12T21:29:49.649885Z","steps":["trace[884296111] 'range keys from in-memory index tree'  (duration: 139.640591ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:29:49.650000Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.487346ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:29:49.650008Z","caller":"traceutil/trace.go:171","msg":"trace[434955050] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:988; }","duration":"138.496788ms","start":"2024-09-12T21:29:49.511508Z","end":"2024-09-12T21:29:49.650005Z","steps":["trace[434955050] 'range keys from in-memory index tree'  (duration: 138.453281ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:29:49.650037Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.380691ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:29:49.650045Z","caller":"traceutil/trace.go:171","msg":"trace[1584951466] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:988; }","duration":"135.387096ms","start":"2024-09-12T21:29:49.514654Z","end":"2024-09-12T21:29:49.650041Z","steps":["trace[1584951466] 'range keys from in-memory index tree'  (duration: 135.369211ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:29:49.650074Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.490021ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:29:49.650079Z","caller":"traceutil/trace.go:171","msg":"trace[989290022] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:988; }","duration":"134.497383ms","start":"2024-09-12T21:29:49.515580Z","end":"2024-09-12T21:29:49.650078Z","steps":["trace[989290022] 'range keys from in-memory index tree'  (duration: 134.488648ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:29:53.217266Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"213.152563ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:29:53.217364Z","caller":"traceutil/trace.go:171","msg":"trace[582653800] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:997; }","duration":"213.25739ms","start":"2024-09-12T21:29:53.004101Z","end":"2024-09-12T21:29:53.217358Z","steps":["trace[582653800] 'range keys from in-memory index tree'  (duration: 213.12227ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:29:53.217442Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"210.907294ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:29:53.217464Z","caller":"traceutil/trace.go:171","msg":"trace[1672822008] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:997; }","duration":"210.929808ms","start":"2024-09-12T21:29:53.006532Z","end":"2024-09-12T21:29:53.217462Z","steps":["trace[1672822008] 'range keys from in-memory index tree'  (duration: 210.871547ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:29:53.217510Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"208.276907ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:29:53.217530Z","caller":"traceutil/trace.go:171","msg":"trace[548058198] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:997; }","duration":"208.297215ms","start":"2024-09-12T21:29:53.009231Z","end":"2024-09-12T21:29:53.217528Z","steps":["trace[548058198] 'range keys from in-memory index tree'  (duration: 208.260261ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:29:53.221585Z","caller":"traceutil/trace.go:171","msg":"trace[27562101] linearizableReadLoop","detail":"{readStateIndex:1022; appliedIndex:1021; }","duration":"170.873991ms","start":"2024-09-12T21:29:53.050704Z","end":"2024-09-12T21:29:53.221578Z","steps":["trace[27562101] 'read index received'  (duration: 168.303861ms)","trace[27562101] 'applied index is now lower than readState.Index'  (duration: 2.569755ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-12T21:29:53.221681Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.966917ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:29:53.221697Z","caller":"traceutil/trace.go:171","msg":"trace[465961155] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:997; }","duration":"170.990803ms","start":"2024-09-12T21:29:53.050704Z","end":"2024-09-12T21:29:53.221694Z","steps":["trace[465961155] 'agreement among raft nodes before linearized reading'  (duration: 170.956804ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:39:15.544028Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1854}
	{"level":"info","ts":"2024-09-12T21:39:15.638731Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1854,"took":"90.596208ms","hash":3668390359,"current-db-size-bytes":8826880,"current-db-size":"8.8 MB","current-db-size-in-use-bytes":4870144,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-09-12T21:39:15.638848Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3668390359,"revision":1854,"compact-revision":-1}
	
	
	==> gcp-auth [5494ecbe6cb1] <==
	2024/09/12 21:32:05 GCP Auth Webhook started!
	2024/09/12 21:32:23 Ready to marshal response ...
	2024/09/12 21:32:23 Ready to write response ...
	2024/09/12 21:32:23 Ready to marshal response ...
	2024/09/12 21:32:23 Ready to write response ...
	2024/09/12 21:32:47 Ready to marshal response ...
	2024/09/12 21:32:47 Ready to write response ...
	2024/09/12 21:32:47 Ready to marshal response ...
	2024/09/12 21:32:47 Ready to write response ...
	2024/09/12 21:32:47 Ready to marshal response ...
	2024/09/12 21:32:47 Ready to write response ...
	2024/09/12 21:40:51 Ready to marshal response ...
	2024/09/12 21:40:51 Ready to write response ...
	2024/09/12 21:40:58 Ready to marshal response ...
	2024/09/12 21:40:58 Ready to write response ...
	2024/09/12 21:41:18 Ready to marshal response ...
	2024/09/12 21:41:18 Ready to write response ...
	2024/09/12 21:41:49 Ready to marshal response ...
	2024/09/12 21:41:49 Ready to write response ...
	2024/09/12 21:41:49 Ready to marshal response ...
	2024/09/12 21:41:49 Ready to write response ...
	
	
	==> kernel <==
	 21:41:59 up 13 min,  0 users,  load average: 0.65, 0.54, 0.42
	Linux addons-094000 5.10.207 #1 SMP PREEMPT Thu Sep 12 17:20:51 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9ce52093e2b2] <==
	I0912 21:32:37.882391       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0912 21:32:37.950853       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0912 21:32:38.035889       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0912 21:32:38.138804       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0912 21:32:38.815120       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0912 21:32:38.825391       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0912 21:32:38.883184       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0912 21:32:38.895626       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0912 21:32:39.058420       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0912 21:32:39.141436       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W0912 21:32:39.175415       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	I0912 21:40:59.762265       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0912 21:41:32.705258       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:41:32.705283       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:41:32.713172       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:41:32.713193       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:41:32.731008       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:41:32.731032       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:41:32.731869       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:41:32.731911       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:41:32.745359       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:41:32.745376       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0912 21:41:33.731673       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0912 21:41:33.746238       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0912 21:41:33.816212       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [2240bd807709] <==
	E0912 21:41:41.992527       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:41:43.482501       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:41:43.482593       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:41:46.062471       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:41:46.062579       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0912 21:41:48.234720       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	W0912 21:41:49.668040       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:41:49.668063       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:41:49.724787       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:41:49.724812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:41:50.906675       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:41:50.906708       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:41:51.335623       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:41:51.335766       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:41:52.399081       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:41:52.399194       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0912 21:41:53.456515       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0912 21:41:53.456828       1 shared_informer.go:320] Caches are synced for resource quota
	I0912 21:41:53.896185       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0912 21:41:53.896233       1 shared_informer.go:320] Caches are synced for garbage collector
	W0912 21:41:55.663706       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:41:55.663840       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:41:56.525267       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:41:56.525388       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0912 21:41:58.984401       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="1.459µs"
	
	
	==> kube-proxy [3ed42e46d3da] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0912 21:29:24.475496       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0912 21:29:24.480217       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.2"]
	E0912 21:29:24.480257       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0912 21:29:24.492666       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0912 21:29:24.492738       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0912 21:29:24.492765       1 server_linux.go:169] "Using iptables Proxier"
	I0912 21:29:24.493697       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0912 21:29:24.493976       1 server.go:483] "Version info" version="v1.31.1"
	I0912 21:29:24.493999       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 21:29:24.494971       1 config.go:199] "Starting service config controller"
	I0912 21:29:24.495136       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0912 21:29:24.495245       1 config.go:105] "Starting endpoint slice config controller"
	I0912 21:29:24.495275       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0912 21:29:24.495725       1 config.go:328] "Starting node config controller"
	I0912 21:29:24.496392       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0912 21:29:24.596727       1 shared_informer.go:320] Caches are synced for node config
	I0912 21:29:24.596749       1 shared_informer.go:320] Caches are synced for service config
	I0912 21:29:24.596755       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [44fe2d901003] <==
	W0912 21:29:16.438726       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0912 21:29:16.438762       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:29:16.438801       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0912 21:29:16.438951       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0912 21:29:16.439007       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0912 21:29:16.439035       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:29:16.439088       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0912 21:29:16.439113       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0912 21:29:16.439163       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0912 21:29:16.439188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:29:16.439221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0912 21:29:16.439255       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:29:16.439291       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0912 21:29:16.439326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:29:16.439368       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0912 21:29:16.439388       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:29:16.439438       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0912 21:29:16.439460       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:29:16.439530       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0912 21:29:16.439555       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:29:17.279936       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0912 21:29:17.280053       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:29:17.329245       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0912 21:29:17.329454       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0912 21:29:17.736378       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 12 21:41:54 addons-094000 kubelet[2055]: I0912 21:41:54.813429    2055 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/398d8dbb-345b-4421-b43b-31488b83f81e-gcp-creds\") pod \"test-local-path\" (UID: \"398d8dbb-345b-4421-b43b-31488b83f81e\") " pod="default/test-local-path"
	Sep 12 21:41:54 addons-094000 kubelet[2055]: I0912 21:41:54.813456    2055 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ccf8019f-9494-470c-adb1-2ca3643c7b43\" (UniqueName: \"kubernetes.io/host-path/398d8dbb-345b-4421-b43b-31488b83f81e-pvc-ccf8019f-9494-470c-adb1-2ca3643c7b43\") pod \"test-local-path\" (UID: \"398d8dbb-345b-4421-b43b-31488b83f81e\") " pod="default/test-local-path"
	Sep 12 21:41:54 addons-094000 kubelet[2055]: I0912 21:41:54.813473    2055 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9fkg\" (UniqueName: \"kubernetes.io/projected/398d8dbb-345b-4421-b43b-31488b83f81e-kube-api-access-l9fkg\") pod \"test-local-path\" (UID: \"398d8dbb-345b-4421-b43b-31488b83f81e\") " pod="default/test-local-path"
	Sep 12 21:41:58 addons-094000 kubelet[2055]: E0912 21:41:58.599403    2055 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="fe3278f3-e2dd-4f79-ae0c-96d5872e626d"
	Sep 12 21:41:58 addons-094000 kubelet[2055]: I0912 21:41:58.950937    2055 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/398d8dbb-345b-4421-b43b-31488b83f81e-pvc-ccf8019f-9494-470c-adb1-2ca3643c7b43\") pod \"398d8dbb-345b-4421-b43b-31488b83f81e\" (UID: \"398d8dbb-345b-4421-b43b-31488b83f81e\") "
	Sep 12 21:41:58 addons-094000 kubelet[2055]: I0912 21:41:58.950954    2055 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/58ed4c4c-7791-49d7-a8b5-fb0f2e78a4af-gcp-creds\") pod \"58ed4c4c-7791-49d7-a8b5-fb0f2e78a4af\" (UID: \"58ed4c4c-7791-49d7-a8b5-fb0f2e78a4af\") "
	Sep 12 21:41:58 addons-094000 kubelet[2055]: I0912 21:41:58.950978    2055 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz6bk\" (UniqueName: \"kubernetes.io/projected/58ed4c4c-7791-49d7-a8b5-fb0f2e78a4af-kube-api-access-pz6bk\") pod \"58ed4c4c-7791-49d7-a8b5-fb0f2e78a4af\" (UID: \"58ed4c4c-7791-49d7-a8b5-fb0f2e78a4af\") "
	Sep 12 21:41:58 addons-094000 kubelet[2055]: I0912 21:41:58.950987    2055 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/398d8dbb-345b-4421-b43b-31488b83f81e-gcp-creds\") pod \"398d8dbb-345b-4421-b43b-31488b83f81e\" (UID: \"398d8dbb-345b-4421-b43b-31488b83f81e\") "
	Sep 12 21:41:58 addons-094000 kubelet[2055]: I0912 21:41:58.950997    2055 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9fkg\" (UniqueName: \"kubernetes.io/projected/398d8dbb-345b-4421-b43b-31488b83f81e-kube-api-access-l9fkg\") pod \"398d8dbb-345b-4421-b43b-31488b83f81e\" (UID: \"398d8dbb-345b-4421-b43b-31488b83f81e\") "
	Sep 12 21:41:58 addons-094000 kubelet[2055]: I0912 21:41:58.951168    2055 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58ed4c4c-7791-49d7-a8b5-fb0f2e78a4af-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "58ed4c4c-7791-49d7-a8b5-fb0f2e78a4af" (UID: "58ed4c4c-7791-49d7-a8b5-fb0f2e78a4af"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 12 21:41:58 addons-094000 kubelet[2055]: I0912 21:41:58.951187    2055 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/398d8dbb-345b-4421-b43b-31488b83f81e-pvc-ccf8019f-9494-470c-adb1-2ca3643c7b43" (OuterVolumeSpecName: "data") pod "398d8dbb-345b-4421-b43b-31488b83f81e" (UID: "398d8dbb-345b-4421-b43b-31488b83f81e"). InnerVolumeSpecName "pvc-ccf8019f-9494-470c-adb1-2ca3643c7b43". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 12 21:41:58 addons-094000 kubelet[2055]: I0912 21:41:58.951480    2055 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/398d8dbb-345b-4421-b43b-31488b83f81e-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "398d8dbb-345b-4421-b43b-31488b83f81e" (UID: "398d8dbb-345b-4421-b43b-31488b83f81e"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 12 21:41:58 addons-094000 kubelet[2055]: I0912 21:41:58.951627    2055 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/398d8dbb-345b-4421-b43b-31488b83f81e-kube-api-access-l9fkg" (OuterVolumeSpecName: "kube-api-access-l9fkg") pod "398d8dbb-345b-4421-b43b-31488b83f81e" (UID: "398d8dbb-345b-4421-b43b-31488b83f81e"). InnerVolumeSpecName "kube-api-access-l9fkg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 12 21:41:58 addons-094000 kubelet[2055]: I0912 21:41:58.951796    2055 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58ed4c4c-7791-49d7-a8b5-fb0f2e78a4af-kube-api-access-pz6bk" (OuterVolumeSpecName: "kube-api-access-pz6bk") pod "58ed4c4c-7791-49d7-a8b5-fb0f2e78a4af" (UID: "58ed4c4c-7791-49d7-a8b5-fb0f2e78a4af"). InnerVolumeSpecName "kube-api-access-pz6bk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 12 21:41:59 addons-094000 kubelet[2055]: I0912 21:41:59.052292    2055 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/398d8dbb-345b-4421-b43b-31488b83f81e-gcp-creds\") on node \"addons-094000\" DevicePath \"\""
	Sep 12 21:41:59 addons-094000 kubelet[2055]: I0912 21:41:59.052304    2055 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-l9fkg\" (UniqueName: \"kubernetes.io/projected/398d8dbb-345b-4421-b43b-31488b83f81e-kube-api-access-l9fkg\") on node \"addons-094000\" DevicePath \"\""
	Sep 12 21:41:59 addons-094000 kubelet[2055]: I0912 21:41:59.052310    2055 reconciler_common.go:288] "Volume detached for volume \"pvc-ccf8019f-9494-470c-adb1-2ca3643c7b43\" (UniqueName: \"kubernetes.io/host-path/398d8dbb-345b-4421-b43b-31488b83f81e-pvc-ccf8019f-9494-470c-adb1-2ca3643c7b43\") on node \"addons-094000\" DevicePath \"\""
	Sep 12 21:41:59 addons-094000 kubelet[2055]: I0912 21:41:59.052324    2055 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/58ed4c4c-7791-49d7-a8b5-fb0f2e78a4af-gcp-creds\") on node \"addons-094000\" DevicePath \"\""
	Sep 12 21:41:59 addons-094000 kubelet[2055]: I0912 21:41:59.052329    2055 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pz6bk\" (UniqueName: \"kubernetes.io/projected/58ed4c4c-7791-49d7-a8b5-fb0f2e78a4af-kube-api-access-pz6bk\") on node \"addons-094000\" DevicePath \"\""
	Sep 12 21:41:59 addons-094000 kubelet[2055]: I0912 21:41:59.353205    2055 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq7x9\" (UniqueName: \"kubernetes.io/projected/d42d83d3-de78-4a99-ab0d-4539040c1a33-kube-api-access-dq7x9\") pod \"d42d83d3-de78-4a99-ab0d-4539040c1a33\" (UID: \"d42d83d3-de78-4a99-ab0d-4539040c1a33\") "
	Sep 12 21:41:59 addons-094000 kubelet[2055]: I0912 21:41:59.353230    2055 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxn2p\" (UniqueName: \"kubernetes.io/projected/21dafca9-6625-404e-9961-8c638a8f1694-kube-api-access-rxn2p\") pod \"21dafca9-6625-404e-9961-8c638a8f1694\" (UID: \"21dafca9-6625-404e-9961-8c638a8f1694\") "
	Sep 12 21:41:59 addons-094000 kubelet[2055]: I0912 21:41:59.354210    2055 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21dafca9-6625-404e-9961-8c638a8f1694-kube-api-access-rxn2p" (OuterVolumeSpecName: "kube-api-access-rxn2p") pod "21dafca9-6625-404e-9961-8c638a8f1694" (UID: "21dafca9-6625-404e-9961-8c638a8f1694"). InnerVolumeSpecName "kube-api-access-rxn2p". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 12 21:41:59 addons-094000 kubelet[2055]: I0912 21:41:59.354497    2055 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d42d83d3-de78-4a99-ab0d-4539040c1a33-kube-api-access-dq7x9" (OuterVolumeSpecName: "kube-api-access-dq7x9") pod "d42d83d3-de78-4a99-ab0d-4539040c1a33" (UID: "d42d83d3-de78-4a99-ab0d-4539040c1a33"). InnerVolumeSpecName "kube-api-access-dq7x9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 12 21:41:59 addons-094000 kubelet[2055]: I0912 21:41:59.453493    2055 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rxn2p\" (UniqueName: \"kubernetes.io/projected/21dafca9-6625-404e-9961-8c638a8f1694-kube-api-access-rxn2p\") on node \"addons-094000\" DevicePath \"\""
	Sep 12 21:41:59 addons-094000 kubelet[2055]: I0912 21:41:59.453508    2055 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-dq7x9\" (UniqueName: \"kubernetes.io/projected/d42d83d3-de78-4a99-ab0d-4539040c1a33-kube-api-access-dq7x9\") on node \"addons-094000\" DevicePath \"\""
	
	
	==> storage-provisioner [01a23a6d9162] <==
	I0912 21:29:24.917689       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0912 21:29:24.923136       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0912 21:29:24.923158       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0912 21:29:24.935406       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0912 21:29:24.935484       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-094000_74258189-11de-4e06-8a56-1e9d51dca455!
	I0912 21:29:24.936210       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e13dad6e-60c5-42c7-9d5e-283ea5eee1cc", APIVersion:"v1", ResourceVersion:"372", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-094000_74258189-11de-4e06-8a56-1e9d51dca455 became leader
	I0912 21:29:25.039916       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-094000_74258189-11de-4e06-8a56-1e9d51dca455!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-094000 -n addons-094000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-094000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-gg68k ingress-nginx-admission-patch-trc86 helper-pod-delete-pvc-ccf8019f-9494-470c-adb1-2ca3643c7b43
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-094000 describe pod busybox ingress-nginx-admission-create-gg68k ingress-nginx-admission-patch-trc86 helper-pod-delete-pvc-ccf8019f-9494-470c-adb1-2ca3643c7b43
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-094000 describe pod busybox ingress-nginx-admission-create-gg68k ingress-nginx-admission-patch-trc86 helper-pod-delete-pvc-ccf8019f-9494-470c-adb1-2ca3643c7b43: exit status 1 (41.679125ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-094000/192.168.105.2
	Start Time:       Thu, 12 Sep 2024 14:32:47 -0700
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jgd8f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-jgd8f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m12s                  default-scheduler  Successfully assigned default/busybox to addons-094000
	  Normal   Pulling    7m41s (x4 over 9m12s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m41s (x4 over 9m11s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m41s (x4 over 9m11s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m25s (x6 over 9m11s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m7s (x20 over 9m11s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-gg68k" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-trc86" not found
	Error from server (NotFound): pods "helper-pod-delete-pvc-ccf8019f-9494-470c-adb1-2ca3643c7b43" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-094000 describe pod busybox ingress-nginx-admission-create-gg68k ingress-nginx-admission-patch-trc86 helper-pod-delete-pvc-ccf8019f-9494-470c-adb1-2ca3643c7b43: exit status 1
--- FAIL: TestAddons/parallel/Registry (71.33s)
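The registry failure above bottoms out in the busybox events: every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc was rejected with "unauthorized: authentication failed". A minimal sketch for retrying the same pull outside the cluster, assuming Docker is installed on the host (the image reference is copied verbatim from the pod events):

	docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc

If that succeeds locally, the rejection is specific to the CI host's registry access rather than to the image itself.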

TestCertOptions (10.17s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-450000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-450000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.908270333s)

-- stdout --
	* [cert-options-450000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-450000" primary control-plane node in "cert-options-450000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-450000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-450000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-450000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-450000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-450000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (80.924042ms)

-- stdout --
	* The control-plane node cert-options-450000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-450000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-450000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-450000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-450000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-450000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (39.245792ms)

-- stdout --
	* The control-plane node cert-options-450000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-450000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-450000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-450000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-450000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-12 15:14:43.791683 -0700 PDT m=+2804.253856542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-450000 -n cert-options-450000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-450000 -n cert-options-450000: exit status 7 (29.581625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-450000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-450000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-450000
--- FAIL: TestCertOptions (10.17s)
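Every failure in this group traces to the same root cause visible in the stderr above: the socket_vmnet helper the qemu2 driver needs is refusing connections on /var/run/socket_vmnet. A hedged first-response sketch for the CI host, assuming socket_vmnet was installed through Homebrew as in the minikube qemu2 driver docs (service name and paths may differ on this agent):

	# Is the helper daemon running, and does its socket exist?
	sudo launchctl list | grep -i socket_vmnet
	ls -l /var/run/socket_vmnet
	# Restart it as a root-owned Homebrew service, per the minikube docs.
	HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services restart socket_vmnet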

TestCertExpiration (195.36s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-152000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-152000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.04304125s)

-- stdout --
	* [cert-expiration-152000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-152000" primary control-plane node in "cert-expiration-152000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-152000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-152000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-152000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-152000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-152000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.180829166s)

-- stdout --
	* [cert-expiration-152000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-152000" primary control-plane node in "cert-expiration-152000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-152000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-152000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-152000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-152000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-152000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-152000" primary control-plane node in "cert-expiration-152000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-152000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-152000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-152000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-12 15:17:43.780112 -0700 PDT m=+2984.247333542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-152000 -n cert-expiration-152000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-152000 -n cert-expiration-152000: exit status 7 (56.3585ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-152000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-152000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-152000
--- FAIL: TestCertExpiration (195.36s)
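Since the VM never boots, the certificate-rotation behavior this test exists to verify is never actually exercised. On a host where the driver works, the same check can be replayed by hand; a sketch reusing the start commands from the log and the apiserver cert path used elsewhere in this suite:

	# Start with a 3m cert lifetime, wait past expiry, then restart with a long
	# lifetime and confirm the apiserver cert was regenerated.
	minikube start -p cert-expiration-152000 --memory=2048 --cert-expiration=3m --driver=qemu2
	sleep 200
	minikube start -p cert-expiration-152000 --memory=2048 --cert-expiration=8760h --driver=qemu2
	minikube -p cert-expiration-152000 ssh "sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"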

TestDockerFlags (10.34s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-239000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-239000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.10189125s)

-- stdout --
	* [docker-flags-239000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-239000" primary control-plane node in "docker-flags-239000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-239000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:14:23.419365    4601 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:14:23.419479    4601 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:14:23.419483    4601 out.go:358] Setting ErrFile to fd 2...
	I0912 15:14:23.419485    4601 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:14:23.419609    4601 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:14:23.420768    4601 out.go:352] Setting JSON to false
	I0912 15:14:23.437056    4601 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4427,"bootTime":1726174836,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:14:23.437124    4601 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:14:23.443538    4601 out.go:177] * [docker-flags-239000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:14:23.450316    4601 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:14:23.450399    4601 notify.go:220] Checking for updates...
	I0912 15:14:23.458310    4601 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:14:23.461308    4601 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:14:23.464352    4601 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:14:23.467358    4601 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:14:23.470363    4601 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:14:23.473607    4601 config.go:182] Loaded profile config "force-systemd-flag-381000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:14:23.473676    4601 config.go:182] Loaded profile config "multinode-323000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:14:23.473733    4601 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:14:23.478315    4601 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:14:23.485306    4601 start.go:297] selected driver: qemu2
	I0912 15:14:23.485312    4601 start.go:901] validating driver "qemu2" against <nil>
	I0912 15:14:23.485318    4601 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:14:23.487700    4601 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 15:14:23.490227    4601 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:14:23.493330    4601 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0912 15:14:23.493361    4601 cni.go:84] Creating CNI manager for ""
	I0912 15:14:23.493368    4601 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:14:23.493378    4601 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 15:14:23.493407    4601 start.go:340] cluster config:
	{Name:docker-flags-239000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:14:23.497151    4601 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:14:23.504330    4601 out.go:177] * Starting "docker-flags-239000" primary control-plane node in "docker-flags-239000" cluster
	I0912 15:14:23.508362    4601 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 15:14:23.508378    4601 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0912 15:14:23.508389    4601 cache.go:56] Caching tarball of preloaded images
	I0912 15:14:23.508448    4601 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:14:23.508454    4601 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0912 15:14:23.508528    4601 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/docker-flags-239000/config.json ...
	I0912 15:14:23.508540    4601 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/docker-flags-239000/config.json: {Name:mk5b53b16adf31dc04f58bbc8e1db921385146e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:14:23.508768    4601 start.go:360] acquireMachinesLock for docker-flags-239000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:14:23.508811    4601 start.go:364] duration metric: took 33.792µs to acquireMachinesLock for "docker-flags-239000"
	I0912 15:14:23.508824    4601 start.go:93] Provisioning new machine with config: &{Name:docker-flags-239000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:14:23.508854    4601 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:14:23.517319    4601 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0912 15:14:23.535758    4601 start.go:159] libmachine.API.Create for "docker-flags-239000" (driver="qemu2")
	I0912 15:14:23.535785    4601 client.go:168] LocalClient.Create starting
	I0912 15:14:23.535851    4601 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:14:23.535884    4601 main.go:141] libmachine: Decoding PEM data...
	I0912 15:14:23.535896    4601 main.go:141] libmachine: Parsing certificate...
	I0912 15:14:23.535931    4601 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:14:23.535956    4601 main.go:141] libmachine: Decoding PEM data...
	I0912 15:14:23.535967    4601 main.go:141] libmachine: Parsing certificate...
	I0912 15:14:23.536414    4601 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:14:23.712532    4601 main.go:141] libmachine: Creating SSH key...
	I0912 15:14:23.890105    4601 main.go:141] libmachine: Creating Disk image...
	I0912 15:14:23.890111    4601 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:14:23.890367    4601 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/docker-flags-239000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/docker-flags-239000/disk.qcow2
	I0912 15:14:23.899770    4601 main.go:141] libmachine: STDOUT: 
	I0912 15:14:23.899790    4601 main.go:141] libmachine: STDERR: 
	I0912 15:14:23.899849    4601 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/docker-flags-239000/disk.qcow2 +20000M
	I0912 15:14:23.907722    4601 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:14:23.907738    4601 main.go:141] libmachine: STDERR: 
	I0912 15:14:23.907756    4601 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/docker-flags-239000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/docker-flags-239000/disk.qcow2
	I0912 15:14:23.907761    4601 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:14:23.907777    4601 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:14:23.907817    4601 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/docker-flags-239000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/docker-flags-239000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/docker-flags-239000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:e0:56:a4:5f:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/docker-flags-239000/disk.qcow2
	I0912 15:14:23.909468    4601 main.go:141] libmachine: STDOUT: 
	I0912 15:14:23.909484    4601 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:14:23.909502    4601 client.go:171] duration metric: took 373.721958ms to LocalClient.Create
	I0912 15:14:25.911607    4601 start.go:128] duration metric: took 2.402800958s to createHost
	I0912 15:14:25.911650    4601 start.go:83] releasing machines lock for "docker-flags-239000", held for 2.402896s
	W0912 15:14:25.911722    4601 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:14:25.935811    4601 out.go:177] * Deleting "docker-flags-239000" in qemu2 ...
	W0912 15:14:25.962646    4601 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:14:25.962666    4601 start.go:729] Will try again in 5 seconds ...
	I0912 15:14:30.964784    4601 start.go:360] acquireMachinesLock for docker-flags-239000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:14:31.090737    4601 start.go:364] duration metric: took 125.824167ms to acquireMachinesLock for "docker-flags-239000"
	I0912 15:14:31.090895    4601 start.go:93] Provisioning new machine with config: &{Name:docker-flags-239000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:14:31.091135    4601 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:14:31.107739    4601 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0912 15:14:31.157536    4601 start.go:159] libmachine.API.Create for "docker-flags-239000" (driver="qemu2")
	I0912 15:14:31.157586    4601 client.go:168] LocalClient.Create starting
	I0912 15:14:31.157715    4601 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:14:31.157774    4601 main.go:141] libmachine: Decoding PEM data...
	I0912 15:14:31.157790    4601 main.go:141] libmachine: Parsing certificate...
	I0912 15:14:31.157854    4601 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:14:31.157899    4601 main.go:141] libmachine: Decoding PEM data...
	I0912 15:14:31.157914    4601 main.go:141] libmachine: Parsing certificate...
	I0912 15:14:31.158656    4601 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:14:31.354274    4601 main.go:141] libmachine: Creating SSH key...
	I0912 15:14:31.418575    4601 main.go:141] libmachine: Creating Disk image...
	I0912 15:14:31.418580    4601 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:14:31.418782    4601 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/docker-flags-239000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/docker-flags-239000/disk.qcow2
	I0912 15:14:31.428035    4601 main.go:141] libmachine: STDOUT: 
	I0912 15:14:31.428057    4601 main.go:141] libmachine: STDERR: 
	I0912 15:14:31.428094    4601 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/docker-flags-239000/disk.qcow2 +20000M
	I0912 15:14:31.435861    4601 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:14:31.435886    4601 main.go:141] libmachine: STDERR: 
	I0912 15:14:31.435901    4601 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/docker-flags-239000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/docker-flags-239000/disk.qcow2
	I0912 15:14:31.435905    4601 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:14:31.435915    4601 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:14:31.435944    4601 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/docker-flags-239000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/docker-flags-239000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/docker-flags-239000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:e1:59:09:fe:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/docker-flags-239000/disk.qcow2
	I0912 15:14:31.437609    4601 main.go:141] libmachine: STDOUT: 
	I0912 15:14:31.437627    4601 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:14:31.437638    4601 client.go:171] duration metric: took 280.054791ms to LocalClient.Create
	I0912 15:14:33.439726    4601 start.go:128] duration metric: took 2.348625125s to createHost
	I0912 15:14:33.439776    4601 start.go:83] releasing machines lock for "docker-flags-239000", held for 2.349038459s
	W0912 15:14:33.440186    4601 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-239000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-239000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:14:33.458836    4601 out.go:201] 
	W0912 15:14:33.466680    4601 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:14:33.466697    4601 out.go:270] * 
	* 
	W0912 15:14:33.469002    4601 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:14:33.478635    4601 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-239000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-239000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-239000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (79.403ms)

-- stdout --
	* The control-plane node docker-flags-239000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-239000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-239000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-239000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-239000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-239000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-239000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-239000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-239000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (44.74975ms)

-- stdout --
	* The control-plane node docker-flags-239000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-239000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-239000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-239000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-239000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-239000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-09-12 15:14:33.6206 -0700 PDT m=+2794.082488917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-239000 -n docker-flags-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-239000 -n docker-flags-239000: exit status 7 (28.734208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-239000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-239000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-239000
--- FAIL: TestDockerFlags (10.34s)
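The assertions that ultimately failed here are plain systemctl queries, so once the socket_vmnet problem above is fixed they can be replayed manually with the exact commands the test runs:

	# Expect Environment to carry FOO=BAR and BAZ=BAT from --docker-env.
	minikube -p docker-flags-239000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	# Expect ExecStart to carry --debug and --icc=true from --docker-opt.
	minikube -p docker-flags-239000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"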

TestForceSystemdFlag (10.4s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-381000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-381000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.210554458s)

-- stdout --
	* [force-systemd-flag-381000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-381000" primary control-plane node in "force-systemd-flag-381000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-381000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:14:18.183100    4580 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:14:18.183213    4580 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:14:18.183215    4580 out.go:358] Setting ErrFile to fd 2...
	I0912 15:14:18.183218    4580 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:14:18.183333    4580 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:14:18.184434    4580 out.go:352] Setting JSON to false
	I0912 15:14:18.200412    4580 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4422,"bootTime":1726174836,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:14:18.200481    4580 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:14:18.207441    4580 out.go:177] * [force-systemd-flag-381000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:14:18.215334    4580 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:14:18.215380    4580 notify.go:220] Checking for updates...
	I0912 15:14:18.226325    4580 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:14:18.229297    4580 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:14:18.232351    4580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:14:18.235283    4580 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:14:18.238311    4580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:14:18.241638    4580 config.go:182] Loaded profile config "force-systemd-env-236000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:14:18.241707    4580 config.go:182] Loaded profile config "multinode-323000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:14:18.241755    4580 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:14:18.245313    4580 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:14:18.252290    4580 start.go:297] selected driver: qemu2
	I0912 15:14:18.252296    4580 start.go:901] validating driver "qemu2" against <nil>
	I0912 15:14:18.252302    4580 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:14:18.254652    4580 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 15:14:18.256333    4580 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:14:18.259382    4580 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0912 15:14:18.259394    4580 cni.go:84] Creating CNI manager for ""
	I0912 15:14:18.259402    4580 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:14:18.259406    4580 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 15:14:18.259435    4580 start.go:340] cluster config:
	{Name:force-systemd-flag-381000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-381000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Static
IP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:14:18.263081    4580 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:14:18.270323    4580 out.go:177] * Starting "force-systemd-flag-381000" primary control-plane node in "force-systemd-flag-381000" cluster
	I0912 15:14:18.274354    4580 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 15:14:18.274373    4580 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0912 15:14:18.274388    4580 cache.go:56] Caching tarball of preloaded images
	I0912 15:14:18.274456    4580 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:14:18.274462    4580 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0912 15:14:18.274522    4580 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/force-systemd-flag-381000/config.json ...
	I0912 15:14:18.274534    4580 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/force-systemd-flag-381000/config.json: {Name:mk4a98c8242222824199280399d3c0e1e2d0546a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:14:18.274776    4580 start.go:360] acquireMachinesLock for force-systemd-flag-381000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:14:18.274814    4580 start.go:364] duration metric: took 29.541µs to acquireMachinesLock for "force-systemd-flag-381000"
	I0912 15:14:18.274828    4580 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-381000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-381000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:14:18.274862    4580 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:14:18.283281    4580 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0912 15:14:18.301894    4580 start.go:159] libmachine.API.Create for "force-systemd-flag-381000" (driver="qemu2")
	I0912 15:14:18.301918    4580 client.go:168] LocalClient.Create starting
	I0912 15:14:18.301982    4580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:14:18.302015    4580 main.go:141] libmachine: Decoding PEM data...
	I0912 15:14:18.302023    4580 main.go:141] libmachine: Parsing certificate...
	I0912 15:14:18.302073    4580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:14:18.302100    4580 main.go:141] libmachine: Decoding PEM data...
	I0912 15:14:18.302109    4580 main.go:141] libmachine: Parsing certificate...
	I0912 15:14:18.302437    4580 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:14:18.460685    4580 main.go:141] libmachine: Creating SSH key...
	I0912 15:14:18.737591    4580 main.go:141] libmachine: Creating Disk image...
	I0912 15:14:18.737602    4580 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:14:18.738088    4580 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-flag-381000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-flag-381000/disk.qcow2
	I0912 15:14:18.747795    4580 main.go:141] libmachine: STDOUT: 
	I0912 15:14:18.747816    4580 main.go:141] libmachine: STDERR: 
	I0912 15:14:18.747862    4580 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-flag-381000/disk.qcow2 +20000M
	I0912 15:14:18.755675    4580 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:14:18.755691    4580 main.go:141] libmachine: STDERR: 
	I0912 15:14:18.755708    4580 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-flag-381000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-flag-381000/disk.qcow2
	I0912 15:14:18.755715    4580 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:14:18.755730    4580 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:14:18.755756    4580 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-flag-381000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-flag-381000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-flag-381000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:31:51:ff:39:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-flag-381000/disk.qcow2
	I0912 15:14:18.757324    4580 main.go:141] libmachine: STDOUT: 
	I0912 15:14:18.757346    4580 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:14:18.757369    4580 client.go:171] duration metric: took 455.45725ms to LocalClient.Create
	I0912 15:14:20.759485    4580 start.go:128] duration metric: took 2.484672292s to createHost
	I0912 15:14:20.759603    4580 start.go:83] releasing machines lock for "force-systemd-flag-381000", held for 2.48484675s
	W0912 15:14:20.759656    4580 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:14:20.784675    4580 out.go:177] * Deleting "force-systemd-flag-381000" in qemu2 ...
	W0912 15:14:20.810435    4580 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:14:20.810459    4580 start.go:729] Will try again in 5 seconds ...
	I0912 15:14:25.812480    4580 start.go:360] acquireMachinesLock for force-systemd-flag-381000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:14:25.911800    4580 start.go:364] duration metric: took 99.1695ms to acquireMachinesLock for "force-systemd-flag-381000"
	I0912 15:14:25.911937    4580 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-381000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-381000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:14:25.912198    4580 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:14:25.926776    4580 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0912 15:14:25.977072    4580 start.go:159] libmachine.API.Create for "force-systemd-flag-381000" (driver="qemu2")
	I0912 15:14:25.977117    4580 client.go:168] LocalClient.Create starting
	I0912 15:14:25.977236    4580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:14:25.977312    4580 main.go:141] libmachine: Decoding PEM data...
	I0912 15:14:25.977326    4580 main.go:141] libmachine: Parsing certificate...
	I0912 15:14:25.977389    4580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:14:25.977438    4580 main.go:141] libmachine: Decoding PEM data...
	I0912 15:14:25.977451    4580 main.go:141] libmachine: Parsing certificate...
	I0912 15:14:25.978059    4580 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:14:26.160316    4580 main.go:141] libmachine: Creating SSH key...
	I0912 15:14:26.291767    4580 main.go:141] libmachine: Creating Disk image...
	I0912 15:14:26.291776    4580 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:14:26.291968    4580 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-flag-381000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-flag-381000/disk.qcow2
	I0912 15:14:26.301308    4580 main.go:141] libmachine: STDOUT: 
	I0912 15:14:26.301325    4580 main.go:141] libmachine: STDERR: 
	I0912 15:14:26.301372    4580 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-flag-381000/disk.qcow2 +20000M
	I0912 15:14:26.309296    4580 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:14:26.309309    4580 main.go:141] libmachine: STDERR: 
	I0912 15:14:26.309321    4580 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-flag-381000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-flag-381000/disk.qcow2
	I0912 15:14:26.309328    4580 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:14:26.309339    4580 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:14:26.309375    4580 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-flag-381000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-flag-381000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-flag-381000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:3b:44:ad:ff:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-flag-381000/disk.qcow2
	I0912 15:14:26.311021    4580 main.go:141] libmachine: STDOUT: 
	I0912 15:14:26.311034    4580 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:14:26.311048    4580 client.go:171] duration metric: took 333.932958ms to LocalClient.Create
	I0912 15:14:28.313282    4580 start.go:128] duration metric: took 2.401071375s to createHost
	I0912 15:14:28.313390    4580 start.go:83] releasing machines lock for "force-systemd-flag-381000", held for 2.401616917s
	W0912 15:14:28.313773    4580 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-381000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-381000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:14:28.331362    4580 out.go:201] 
	W0912 15:14:28.340333    4580 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:14:28.340372    4580 out.go:270] * 
	* 
	W0912 15:14:28.342244    4580 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:14:28.352269    4580 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-381000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-381000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-381000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (76.037542ms)

-- stdout --
	* The control-plane node force-systemd-flag-381000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-381000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-381000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-12 15:14:28.445257 -0700 PDT m=+2788.907000209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-381000 -n force-systemd-flag-381000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-381000 -n force-systemd-flag-381000: exit status 7 (34.802958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-381000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-381000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-381000
--- FAIL: TestForceSystemdFlag (10.40s)
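
TestForceSystemdFlag never got past host creation: both VM launches go through /opt/socket_vmnet/bin/socket_vmnet_client, and both fail with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon on the build agent is not accepting connections. A minimal pre-flight probe for that condition might look like the Go sketch below (not part of the test suite; it assumes only that the daemon serves a plain unix-domain socket at /var/run/socket_vmnet):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Try the same unix socket the QEMU launch above could not reach.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// Matches the failure mode in the log:
		// Failed to connect to "/var/run/socket_vmnet": Connection refused
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}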

TestForceSystemdEnv (10.52s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-236000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-236000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.324586375s)

-- stdout --
	* [force-systemd-env-236000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-236000" primary control-plane node in "force-systemd-env-236000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-236000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:14:12.903744    4548 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:14:12.903860    4548 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:14:12.903863    4548 out.go:358] Setting ErrFile to fd 2...
	I0912 15:14:12.903865    4548 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:14:12.904004    4548 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:14:12.905109    4548 out.go:352] Setting JSON to false
	I0912 15:14:12.921842    4548 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4416,"bootTime":1726174836,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:14:12.921914    4548 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:14:12.927242    4548 out.go:177] * [force-systemd-env-236000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:14:12.936384    4548 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:14:12.936439    4548 notify.go:220] Checking for updates...
	I0912 15:14:12.943313    4548 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:14:12.946319    4548 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:14:12.949325    4548 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:14:12.957340    4548 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:14:12.960374    4548 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0912 15:14:12.963668    4548 config.go:182] Loaded profile config "multinode-323000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:14:12.963727    4548 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:14:12.967338    4548 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:14:12.974282    4548 start.go:297] selected driver: qemu2
	I0912 15:14:12.974287    4548 start.go:901] validating driver "qemu2" against <nil>
	I0912 15:14:12.974298    4548 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:14:12.976676    4548 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 15:14:12.980323    4548 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:14:12.984405    4548 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0912 15:14:12.984432    4548 cni.go:84] Creating CNI manager for ""
	I0912 15:14:12.984446    4548 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:14:12.984453    4548 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 15:14:12.984482    4548 start.go:340] cluster config:
	{Name:force-systemd-env-236000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-236000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:14:12.988372    4548 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:14:12.995323    4548 out.go:177] * Starting "force-systemd-env-236000" primary control-plane node in "force-systemd-env-236000" cluster
	I0912 15:14:12.999315    4548 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 15:14:12.999328    4548 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0912 15:14:12.999334    4548 cache.go:56] Caching tarball of preloaded images
	I0912 15:14:12.999395    4548 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:14:12.999399    4548 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0912 15:14:12.999451    4548 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/force-systemd-env-236000/config.json ...
	I0912 15:14:12.999461    4548 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/force-systemd-env-236000/config.json: {Name:mkbdd9e15e322c81135bdcf9d6425be207b1d25a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:14:12.999654    4548 start.go:360] acquireMachinesLock for force-systemd-env-236000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:14:12.999687    4548 start.go:364] duration metric: took 25.541µs to acquireMachinesLock for "force-systemd-env-236000"
	I0912 15:14:12.999699    4548 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-236000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-236000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:14:12.999730    4548 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:14:13.007288    4548 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0912 15:14:13.024113    4548 start.go:159] libmachine.API.Create for "force-systemd-env-236000" (driver="qemu2")
	I0912 15:14:13.024143    4548 client.go:168] LocalClient.Create starting
	I0912 15:14:13.024222    4548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:14:13.024256    4548 main.go:141] libmachine: Decoding PEM data...
	I0912 15:14:13.024263    4548 main.go:141] libmachine: Parsing certificate...
	I0912 15:14:13.024301    4548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:14:13.024324    4548 main.go:141] libmachine: Decoding PEM data...
	I0912 15:14:13.024336    4548 main.go:141] libmachine: Parsing certificate...
	I0912 15:14:13.024720    4548 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:14:13.212891    4548 main.go:141] libmachine: Creating SSH key...
	I0912 15:14:13.306438    4548 main.go:141] libmachine: Creating Disk image...
	I0912 15:14:13.306448    4548 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:14:13.306742    4548 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-env-236000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-env-236000/disk.qcow2
	I0912 15:14:13.316422    4548 main.go:141] libmachine: STDOUT: 
	I0912 15:14:13.316442    4548 main.go:141] libmachine: STDERR: 
	I0912 15:14:13.316492    4548 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-env-236000/disk.qcow2 +20000M
	I0912 15:14:13.324679    4548 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:14:13.324695    4548 main.go:141] libmachine: STDERR: 
	I0912 15:14:13.324707    4548 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-env-236000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-env-236000/disk.qcow2
	I0912 15:14:13.324711    4548 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:14:13.324725    4548 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:14:13.324753    4548 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-env-236000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-env-236000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-env-236000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:61:6c:79:dc:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-env-236000/disk.qcow2
	I0912 15:14:13.326493    4548 main.go:141] libmachine: STDOUT: 
	I0912 15:14:13.326508    4548 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:14:13.326532    4548 client.go:171] duration metric: took 302.393917ms to LocalClient.Create
	I0912 15:14:15.328698    4548 start.go:128] duration metric: took 2.329003959s to createHost
	I0912 15:14:15.328780    4548 start.go:83] releasing machines lock for "force-systemd-env-236000", held for 2.329148041s
	W0912 15:14:15.328903    4548 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:14:15.336155    4548 out.go:177] * Deleting "force-systemd-env-236000" in qemu2 ...
	W0912 15:14:15.369890    4548 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:14:15.369917    4548 start.go:729] Will try again in 5 seconds ...
	I0912 15:14:20.371946    4548 start.go:360] acquireMachinesLock for force-systemd-env-236000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:14:20.759729    4548 start.go:364] duration metric: took 387.6725ms to acquireMachinesLock for "force-systemd-env-236000"
	I0912 15:14:20.759883    4548 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-236000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-236000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:14:20.760074    4548 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:14:20.775697    4548 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0912 15:14:20.825137    4548 start.go:159] libmachine.API.Create for "force-systemd-env-236000" (driver="qemu2")
	I0912 15:14:20.825189    4548 client.go:168] LocalClient.Create starting
	I0912 15:14:20.825287    4548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:14:20.825358    4548 main.go:141] libmachine: Decoding PEM data...
	I0912 15:14:20.825389    4548 main.go:141] libmachine: Parsing certificate...
	I0912 15:14:20.825448    4548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:14:20.825493    4548 main.go:141] libmachine: Decoding PEM data...
	I0912 15:14:20.825506    4548 main.go:141] libmachine: Parsing certificate...
	I0912 15:14:20.826155    4548 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:14:21.011514    4548 main.go:141] libmachine: Creating SSH key...
	I0912 15:14:21.123624    4548 main.go:141] libmachine: Creating Disk image...
	I0912 15:14:21.123629    4548 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:14:21.123852    4548 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-env-236000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-env-236000/disk.qcow2
	I0912 15:14:21.133417    4548 main.go:141] libmachine: STDOUT: 
	I0912 15:14:21.133439    4548 main.go:141] libmachine: STDERR: 
	I0912 15:14:21.133484    4548 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-env-236000/disk.qcow2 +20000M
	I0912 15:14:21.141412    4548 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:14:21.141436    4548 main.go:141] libmachine: STDERR: 
	I0912 15:14:21.141447    4548 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-env-236000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-env-236000/disk.qcow2
	I0912 15:14:21.141452    4548 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:14:21.141460    4548 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:14:21.141491    4548 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-env-236000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-env-236000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-env-236000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:3c:e4:82:e8:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/force-systemd-env-236000/disk.qcow2
	I0912 15:14:21.143174    4548 main.go:141] libmachine: STDOUT: 
	I0912 15:14:21.143192    4548 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:14:21.143204    4548 client.go:171] duration metric: took 318.019583ms to LocalClient.Create
	I0912 15:14:23.145487    4548 start.go:128] duration metric: took 2.385428083s to createHost
	I0912 15:14:23.145558    4548 start.go:83] releasing machines lock for "force-systemd-env-236000", held for 2.385862375s
	W0912 15:14:23.145903    4548 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-236000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-236000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:14:23.162571    4548 out.go:201] 
	W0912 15:14:23.171427    4548 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:14:23.171445    4548 out.go:270] * 
	* 
	W0912 15:14:23.173521    4548 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:14:23.184382    4548 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-236000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-236000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-236000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (78.314958ms)

-- stdout --
	* The control-plane node force-systemd-env-236000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-236000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-236000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-12 15:14:23.280411 -0700 PDT m=+2783.742009376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-236000 -n force-systemd-env-236000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-236000 -n force-systemd-env-236000: exit status 7 (35.753625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-236000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-236000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-236000
--- FAIL: TestForceSystemdEnv (10.52s)
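
Had either force-systemd host come up, docker_test.go:110 would have checked the guest's Docker cgroup driver via `docker info --format {{.CgroupDriver}}`; with --force-systemd or MINIKUBE_FORCE_SYSTEMD=true in effect the expected value is "systemd". A standalone sketch of that assertion (a hypothetical helper, not the actual test code), runnable against any reachable Docker daemon:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Equivalent of: docker info --format {{.CgroupDriver}}
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Fprintf(os.Stderr, "docker info failed: %v\n", err)
		os.Exit(1)
	}
	if driver := strings.TrimSpace(string(out)); driver != "systemd" {
		fmt.Fprintf(os.Stderr, "expected cgroup driver \"systemd\", got %q\n", driver)
		os.Exit(1)
	}
	fmt.Println("cgroup driver is systemd")
}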

TestFunctional/parallel/ServiceCmdConnect (29.19s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-384000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-384000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-vc7hw" [1bff9fcb-c996-4630-ac27-ff2ca00b3c31] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-vc7hw" [1bff9fcb-c996-4630-ac27-ff2ca00b3c31] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.006933375s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:30546
functional_test.go:1661: error fetching http://192.168.105.4:30546: Get "http://192.168.105.4:30546": dial tcp 192.168.105.4:30546: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30546: Get "http://192.168.105.4:30546": dial tcp 192.168.105.4:30546: connect: connection refused
E0912 14:47:27.469541    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1661: error fetching http://192.168.105.4:30546: Get "http://192.168.105.4:30546": dial tcp 192.168.105.4:30546: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30546: Get "http://192.168.105.4:30546": dial tcp 192.168.105.4:30546: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30546: Get "http://192.168.105.4:30546": dial tcp 192.168.105.4:30546: connect: connection refused
2024/09/12 14:47:36 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1661: error fetching http://192.168.105.4:30546: Get "http://192.168.105.4:30546": dial tcp 192.168.105.4:30546: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30546: Get "http://192.168.105.4:30546": dial tcp 192.168.105.4:30546: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:30546: Get "http://192.168.105.4:30546": dial tcp 192.168.105.4:30546: connect: connection refused
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-384000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-vc7hw
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-384000/192.168.105.4
Start Time:       Thu, 12 Sep 2024 14:47:16 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.10
IPs:
  IP:  10.244.0.10
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://0df32a8efdd6d86a5749cd0350b0ccd395e5400bc81bc5200390cb682771200e
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 12 Sep 2024 14:47:36 -0700
      Finished:     Thu, 12 Sep 2024 14:47:36 -0700
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 12 Sep 2024 14:47:20 -0700
      Finished:     Thu, 12 Sep 2024 14:47:20 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7qjvf (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-7qjvf:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  28s               default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-vc7hw to functional-384000
  Normal   Pulling    28s               kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
  Normal   Pulled     24s               kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 3.884s (3.884s including waiting). Image size: 84957542 bytes.
  Normal   Created    8s (x3 over 24s)  kubelet            Created container echoserver-arm
  Normal   Started    8s (x3 over 24s)  kubelet            Started container echoserver-arm
  Normal   Pulled     8s (x2 over 24s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Warning  BackOff    7s (x3 over 23s)  kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-vc7hw_default(1bff9fcb-c996-4630-ac27-ff2ca00b3c31)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-384000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
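
The `exec format error` above means the kernel refused to execute /usr/sbin/nginx, which on an arm64 node typically indicates the image packaged a binary built for a different architecture (presumably x86_64, despite the echoserver-arm tag). One way to confirm, sketched below, is to read the ELF machine field of the binary after extracting it from the image (illustrative only; the extraction step and file path are assumptions):

package main

import (
	"debug/elf"
	"fmt"
	"os"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: elfarch <path-to-binary>")
		os.Exit(2)
	}
	// e.g. a copy of /usr/sbin/nginx pulled out of the image with
	// `docker cp` from a stopped container (path is illustrative).
	f, err := elf.Open(os.Args[1])
	if err != nil {
		fmt.Fprintf(os.Stderr, "not a readable ELF file: %v\n", err)
		os.Exit(1)
	}
	defer f.Close()
	// On an arm64 node this should print EM_AARCH64; EM_X86_64 here
	// would explain the "exec format error" above.
	fmt.Println("machine:", f.Machine)
}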
functional_test.go:1614: (dbg) Run:  kubectl --context functional-384000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.104.75.219
IPs:                      10.104.75.219
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30546/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
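
Note the empty `Endpoints:` field above: with the only backing pod crash-looping and never Ready, the NodePort service has nothing to forward to, which is consistent with every fetch at functional_test.go:1661 failing with connection refused. A rough sketch of the retry loop the test performs against the service URL (URL taken from the log; attempt count and timings are assumptions):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// URL reported by `minikube service hello-node-connect --url` in the log.
	url := "http://192.168.105.4:30546"
	client := &http.Client{Timeout: 3 * time.Second}
	for i := 0; i < 5; i++ {
		resp, err := client.Get(url)
		if err != nil {
			// With no ready endpoints behind the NodePort, every attempt
			// fails here with "connection refused", as in the log above.
			fmt.Println("fetch failed:", err)
			time.Sleep(2 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("status:", resp.Status)
		return
	}
}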
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-384000 -n functional-384000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|----------------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                 Args                 |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-384000 ssh findmnt        | functional-384000 | jenkins | v1.34.0 | 12 Sep 24 14:47 PDT | 12 Sep 24 14:47 PDT |
	|                | -T /mount2                           |                   |         |         |                     |                     |
	| ssh            | functional-384000 ssh findmnt        | functional-384000 | jenkins | v1.34.0 | 12 Sep 24 14:47 PDT | 12 Sep 24 14:47 PDT |
	|                | -T /mount3                           |                   |         |         |                     |                     |
	| mount          | -p functional-384000                 | functional-384000 | jenkins | v1.34.0 | 12 Sep 24 14:47 PDT |                     |
	|                | --kill=true                          |                   |         |         |                     |                     |
	| addons         | functional-384000 addons list        | functional-384000 | jenkins | v1.34.0 | 12 Sep 24 14:47 PDT | 12 Sep 24 14:47 PDT |
	| addons         | functional-384000 addons list        | functional-384000 | jenkins | v1.34.0 | 12 Sep 24 14:47 PDT | 12 Sep 24 14:47 PDT |
	|                | -o json                              |                   |         |         |                     |                     |
	| service        | functional-384000 service            | functional-384000 | jenkins | v1.34.0 | 12 Sep 24 14:47 PDT | 12 Sep 24 14:47 PDT |
	|                | hello-node-connect --url             |                   |         |         |                     |                     |
	| service        | functional-384000 service list       | functional-384000 | jenkins | v1.34.0 | 12 Sep 24 14:47 PDT | 12 Sep 24 14:47 PDT |
	| service        | functional-384000 service list       | functional-384000 | jenkins | v1.34.0 | 12 Sep 24 14:47 PDT | 12 Sep 24 14:47 PDT |
	|                | -o json                              |                   |         |         |                     |                     |
	| service        | functional-384000 service            | functional-384000 | jenkins | v1.34.0 | 12 Sep 24 14:47 PDT | 12 Sep 24 14:47 PDT |
	|                | --namespace=default --https          |                   |         |         |                     |                     |
	|                | --url hello-node                     |                   |         |         |                     |                     |
	| service        | functional-384000                    | functional-384000 | jenkins | v1.34.0 | 12 Sep 24 14:47 PDT | 12 Sep 24 14:47 PDT |
	|                | service hello-node --url             |                   |         |         |                     |                     |
	|                | --format={{.IP}}                     |                   |         |         |                     |                     |
	| service        | functional-384000 service            | functional-384000 | jenkins | v1.34.0 | 12 Sep 24 14:47 PDT | 12 Sep 24 14:47 PDT |
	|                | hello-node --url                     |                   |         |         |                     |                     |
	| start          | -p functional-384000                 | functional-384000 | jenkins | v1.34.0 | 12 Sep 24 14:47 PDT |                     |
	|                | --dry-run --memory                   |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr              |                   |         |         |                     |                     |
	|                | --driver=qemu2                       |                   |         |         |                     |                     |
	| start          | -p functional-384000                 | functional-384000 | jenkins | v1.34.0 | 12 Sep 24 14:47 PDT |                     |
	|                | --dry-run --memory                   |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr              |                   |         |         |                     |                     |
	|                | --driver=qemu2                       |                   |         |         |                     |                     |
	| start          | -p functional-384000 --dry-run       | functional-384000 | jenkins | v1.34.0 | 12 Sep 24 14:47 PDT |                     |
	|                | --alsologtostderr -v=1               |                   |         |         |                     |                     |
	|                | --driver=qemu2                       |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                   | functional-384000 | jenkins | v1.34.0 | 12 Sep 24 14:47 PDT | 12 Sep 24 14:47 PDT |
	|                | -p functional-384000                 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                   |         |         |                     |                     |
	| image          | functional-384000                    | functional-384000 | jenkins | v1.34.0 | 12 Sep 24 14:47 PDT | 12 Sep 24 14:47 PDT |
	|                | image ls --format short              |                   |         |         |                     |                     |
	|                | --alsologtostderr                    |                   |         |         |                     |                     |
	| image          | functional-384000                    | functional-384000 | jenkins | v1.34.0 | 12 Sep 24 14:47 PDT | 12 Sep 24 14:47 PDT |
	|                | image ls --format json               |                   |         |         |                     |                     |
	|                | --alsologtostderr                    |                   |         |         |                     |                     |
	| image          | functional-384000                    | functional-384000 | jenkins | v1.34.0 | 12 Sep 24 14:47 PDT | 12 Sep 24 14:47 PDT |
	|                | image ls --format yaml               |                   |         |         |                     |                     |
	|                | --alsologtostderr                    |                   |         |         |                     |                     |
	| image          | functional-384000                    | functional-384000 | jenkins | v1.34.0 | 12 Sep 24 14:47 PDT | 12 Sep 24 14:47 PDT |
	|                | image ls --format table              |                   |         |         |                     |                     |
	|                | --alsologtostderr                    |                   |         |         |                     |                     |
	| ssh            | functional-384000 ssh pgrep          | functional-384000 | jenkins | v1.34.0 | 12 Sep 24 14:47 PDT |                     |
	|                | buildkitd                            |                   |         |         |                     |                     |
	| image          | functional-384000 image build -t     | functional-384000 | jenkins | v1.34.0 | 12 Sep 24 14:47 PDT | 12 Sep 24 14:47 PDT |
	|                | localhost/my-image:functional-384000 |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr     |                   |         |         |                     |                     |
	| image          | functional-384000 image ls           | functional-384000 | jenkins | v1.34.0 | 12 Sep 24 14:47 PDT | 12 Sep 24 14:47 PDT |
	| update-context | functional-384000                    | functional-384000 | jenkins | v1.34.0 | 12 Sep 24 14:47 PDT | 12 Sep 24 14:47 PDT |
	|                | update-context                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                   |         |         |                     |                     |
	| update-context | functional-384000                    | functional-384000 | jenkins | v1.34.0 | 12 Sep 24 14:47 PDT | 12 Sep 24 14:47 PDT |
	|                | update-context                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                   |         |         |                     |                     |
	| update-context | functional-384000                    | functional-384000 | jenkins | v1.34.0 | 12 Sep 24 14:47 PDT | 12 Sep 24 14:47 PDT |
	|                | update-context                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                   |         |         |                     |                     |
	|----------------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 14:47:29
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 14:47:29.676182    2950 out.go:345] Setting OutFile to fd 1 ...
	I0912 14:47:29.676300    2950 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 14:47:29.676302    2950 out.go:358] Setting ErrFile to fd 2...
	I0912 14:47:29.676308    2950 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 14:47:29.676415    2950 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 14:47:29.677621    2950 out.go:352] Setting JSON to false
	I0912 14:47:29.694733    2950 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2813,"bootTime":1726174836,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 14:47:29.694803    2950 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 14:47:29.698692    2950 out.go:177] * [functional-384000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 14:47:29.705751    2950 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 14:47:29.705822    2950 notify.go:220] Checking for updates...
	I0912 14:47:29.712754    2950 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 14:47:29.715758    2950 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 14:47:29.718766    2950 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 14:47:29.721737    2950 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 14:47:29.724767    2950 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 14:47:29.726272    2950 config.go:182] Loaded profile config "functional-384000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 14:47:29.726523    2950 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 14:47:29.730703    2950 out.go:177] * Using the qemu2 driver based on existing profile
	I0912 14:47:29.737587    2950 start.go:297] selected driver: qemu2
	I0912 14:47:29.737594    2950 start.go:901] validating driver "qemu2" against &{Name:functional-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-384000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 14:47:29.737664    2950 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 14:47:29.739882    2950 cni.go:84] Creating CNI manager for ""
	I0912 14:47:29.739895    2950 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 14:47:29.739935    2950 start.go:340] cluster config:
	{Name:functional-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-384000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 14:47:29.751727    2950 out.go:177] * dry-run validation complete!
	
	
	==> Docker <==
	Sep 12 21:47:35 functional-384000 dockerd[5658]: time="2024-09-12T21:47:35.776685187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 21:47:35 functional-384000 dockerd[5658]: time="2024-09-12T21:47:35.776894237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 21:47:35 functional-384000 dockerd[5652]: time="2024-09-12T21:47:35.866369690Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 12 21:47:36 functional-384000 dockerd[5658]: time="2024-09-12T21:47:36.174705022Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 12 21:47:36 functional-384000 dockerd[5658]: time="2024-09-12T21:47:36.174767679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 12 21:47:36 functional-384000 dockerd[5658]: time="2024-09-12T21:47:36.174781885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 21:47:36 functional-384000 dockerd[5658]: time="2024-09-12T21:47:36.174843708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 21:47:36 functional-384000 dockerd[5658]: time="2024-09-12T21:47:36.200619202Z" level=info msg="shim disconnected" id=0df32a8efdd6d86a5749cd0350b0ccd395e5400bc81bc5200390cb682771200e namespace=moby
	Sep 12 21:47:36 functional-384000 dockerd[5652]: time="2024-09-12T21:47:36.200757096Z" level=info msg="ignoring event" container=0df32a8efdd6d86a5749cd0350b0ccd395e5400bc81bc5200390cb682771200e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:47:36 functional-384000 dockerd[5658]: time="2024-09-12T21:47:36.200841999Z" level=warning msg="cleaning up after shim disconnected" id=0df32a8efdd6d86a5749cd0350b0ccd395e5400bc81bc5200390cb682771200e namespace=moby
	Sep 12 21:47:36 functional-384000 dockerd[5658]: time="2024-09-12T21:47:36.200852539Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 12 21:47:37 functional-384000 cri-dockerd[5927]: time="2024-09-12T21:47:37Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Status: Downloaded newer image for kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 12 21:47:37 functional-384000 dockerd[5658]: time="2024-09-12T21:47:37.619009027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 12 21:47:37 functional-384000 dockerd[5658]: time="2024-09-12T21:47:37.619057019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 12 21:47:37 functional-384000 dockerd[5658]: time="2024-09-12T21:47:37.619064726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 21:47:37 functional-384000 dockerd[5658]: time="2024-09-12T21:47:37.619101554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 21:47:38 functional-384000 dockerd[5658]: time="2024-09-12T21:47:38.190493100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 12 21:47:38 functional-384000 dockerd[5658]: time="2024-09-12T21:47:38.190701025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 12 21:47:38 functional-384000 dockerd[5658]: time="2024-09-12T21:47:38.190708524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 21:47:38 functional-384000 dockerd[5658]: time="2024-09-12T21:47:38.190736603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 21:47:38 functional-384000 dockerd[5652]: time="2024-09-12T21:47:38.214004329Z" level=info msg="ignoring event" container=024f2a1942be0ec421a4e6cf5fad163b136c00efe78a63f61eefdfbfe93ed103 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:47:38 functional-384000 dockerd[5658]: time="2024-09-12T21:47:38.214650226Z" level=info msg="shim disconnected" id=024f2a1942be0ec421a4e6cf5fad163b136c00efe78a63f61eefdfbfe93ed103 namespace=moby
	Sep 12 21:47:38 functional-384000 dockerd[5658]: time="2024-09-12T21:47:38.214681762Z" level=warning msg="cleaning up after shim disconnected" id=024f2a1942be0ec421a4e6cf5fad163b136c00efe78a63f61eefdfbfe93ed103 namespace=moby
	Sep 12 21:47:38 functional-384000 dockerd[5658]: time="2024-09-12T21:47:38.214685970Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 12 21:47:38 functional-384000 dockerd[5658]: time="2024-09-12T21:47:38.218743069Z" level=warning msg="cleanup warnings time=\"2024-09-12T21:47:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                  CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	024f2a1942be0       72565bf5bbedf                                                                                          6 seconds ago        Exited              echoserver-arm              2                   48b8d58d409bc       hello-node-64b4f8f9ff-rh5f2
	854bbcc0f61e5       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   7 seconds ago        Running             dashboard-metrics-scraper   0                   93c9fbba36f09       dashboard-metrics-scraper-c5db448b4-cx7v7
	0df32a8efdd6d       72565bf5bbedf                                                                                          8 seconds ago        Exited              echoserver-arm              2                   9cceb346a1f66       hello-node-connect-65d86f57f4-vc7hw
	f4b618d13d0e7       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         9 seconds ago        Running             kubernetes-dashboard        0                   520252a854bec       kubernetes-dashboard-695b96c756-dqgwq
	e5d1480bc6c37       nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3                          28 seconds ago       Running             myfrontend                  0                   5399229f1b90f       sp-pod
	4a3ede54c78a0       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    33 seconds ago       Exited              mount-munger                0                   e0e475d02a64f       busybox-mount
	01a2d76b2ad99       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                          42 seconds ago       Running             nginx                       0                   22c7532d0794c       nginx-svc
	29c8469557ed5       2f6c962e7b831                                                                                          About a minute ago   Running             coredns                     2                   3fda3c91443ff       coredns-7c65d6cfc9-vp6n8
	628809c637d42       ba04bb24b9575                                                                                          About a minute ago   Running             storage-provisioner         2                   91212670498f5       storage-provisioner
	12984ce9e9011       24a140c548c07                                                                                          About a minute ago   Running             kube-proxy                  2                   835fe57fe730c       kube-proxy-g4qlh
	5940bab20e79d       27e3830e14027                                                                                          About a minute ago   Running             etcd                        2                   6d4851666edeb       etcd-functional-384000
	6ea672d4ee046       279f381cb3736                                                                                          About a minute ago   Running             kube-controller-manager     2                   8932c766cdd3c       kube-controller-manager-functional-384000
	5e28c3b1657f7       7f8aa378bb47d                                                                                          About a minute ago   Running             kube-scheduler              2                   6f2fb99a05b59       kube-scheduler-functional-384000
	201611a55c1f2       d3f53a98c0a9d                                                                                          About a minute ago   Running             kube-apiserver              0                   c620d06ba0556       kube-apiserver-functional-384000
	c59b5c7b703ff       2f6c962e7b831                                                                                          2 minutes ago        Exited              coredns                     1                   de287576cb239       coredns-7c65d6cfc9-vp6n8
	53faa01db79a4       24a140c548c07                                                                                          2 minutes ago        Exited              kube-proxy                  1                   983715ed141bb       kube-proxy-g4qlh
	366343e68a4f1       ba04bb24b9575                                                                                          2 minutes ago        Exited              storage-provisioner         1                   1b5ad9ede582c       storage-provisioner
	6ddc42747539d       7f8aa378bb47d                                                                                          2 minutes ago        Exited              kube-scheduler              1                   b02bfb7814289       kube-scheduler-functional-384000
	fc70bb9d850db       27e3830e14027                                                                                          2 minutes ago        Exited              etcd                        1                   37b5a3a988cd0       etcd-functional-384000
	d38279e188b49       279f381cb3736                                                                                          2 minutes ago        Exited              kube-controller-manager     1                   24127f46acb90       kube-controller-manager-functional-384000
	
	
	==> coredns [29c8469557ed] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50408 - 46717 "HINFO IN 6472521954302110320.5940400712481196698. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010826467s
	[INFO] 10.244.0.1:54385 - 40920 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000100191s
	[INFO] 10.244.0.1:7267 - 50119 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000160931s
	[INFO] 10.244.0.1:16667 - 25235 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.000944133s
	[INFO] 10.244.0.1:42970 - 43148 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000024913s
	[INFO] 10.244.0.1:50148 - 36586 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000157515s
	[INFO] 10.244.0.1:19934 - 43240 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000290492s
	
	
	==> coredns [c59b5c7b703f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:34559 - 34373 "HINFO IN 250758401328423977.8554749670312435245. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.009999974s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-384000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-384000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=functional-384000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_12T14_45_09_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 21:45:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-384000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 21:47:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 21:47:28 +0000   Thu, 12 Sep 2024 21:45:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 21:47:28 +0000   Thu, 12 Sep 2024 21:45:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 21:47:28 +0000   Thu, 12 Sep 2024 21:45:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 21:47:28 +0000   Thu, 12 Sep 2024 21:45:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-384000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904752Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904752Ki
	  pods:               110
	System Info:
	  Machine ID:                 37499457ab15417b9179c22a99d77fa2
	  System UUID:                37499457ab15417b9179c22a99d77fa2
	  Boot ID:                    d5b93a81-c5fb-48e7-a3a9-078f511506d3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-rh5f2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  default                     hello-node-connect-65d86f57f4-vc7hw          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 coredns-7c65d6cfc9-vp6n8                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m30s
	  kube-system                 etcd-functional-384000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m35s
	  kube-system                 kube-apiserver-functional-384000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-controller-manager-functional-384000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-proxy-g4qlh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-scheduler-functional-384000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-cx7v7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-dqgwq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m29s                kube-proxy       
	  Normal  Starting                 76s                  kube-proxy       
	  Normal  Starting                 2m3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  2m35s                kubelet          Node functional-384000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m35s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m35s                kubelet          Node functional-384000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m35s                kubelet          Node functional-384000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m35s                kubelet          Starting kubelet.
	  Normal  NodeReady                2m32s                kubelet          Node functional-384000 status is now: NodeReady
	  Normal  RegisteredNode           2m31s                node-controller  Node functional-384000 event: Registered Node functional-384000 in Controller
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node functional-384000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node functional-384000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m5s (x7 over 2m5s)  kubelet          Node functional-384000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m                   node-controller  Node functional-384000 event: Registered Node functional-384000 in Controller
	  Normal  Starting                 79s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  79s (x8 over 79s)    kubelet          Node functional-384000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    79s (x8 over 79s)    kubelet          Node functional-384000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     79s (x7 over 79s)    kubelet          Node functional-384000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  79s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           73s                  node-controller  Node functional-384000 event: Registered Node functional-384000 in Controller
	
	
	==> dmesg <==
	[  +2.438282] kauditd_printk_skb: 199 callbacks suppressed
	[  +8.487349] kauditd_printk_skb: 35 callbacks suppressed
	[  +9.140469] systemd-fstab-generator[4750]: Ignoring "noauto" option for root device
	[Sep12 21:46] systemd-fstab-generator[5184]: Ignoring "noauto" option for root device
	[  +0.054066] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.102511] systemd-fstab-generator[5217]: Ignoring "noauto" option for root device
	[  +0.101720] systemd-fstab-generator[5229]: Ignoring "noauto" option for root device
	[  +0.106554] systemd-fstab-generator[5243]: Ignoring "noauto" option for root device
	[  +5.127540] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.398910] systemd-fstab-generator[5876]: Ignoring "noauto" option for root device
	[  +0.074044] systemd-fstab-generator[5888]: Ignoring "noauto" option for root device
	[  +0.066511] systemd-fstab-generator[5900]: Ignoring "noauto" option for root device
	[  +0.084311] systemd-fstab-generator[5915]: Ignoring "noauto" option for root device
	[  +0.226439] systemd-fstab-generator[6086]: Ignoring "noauto" option for root device
	[  +1.152950] systemd-fstab-generator[6207]: Ignoring "noauto" option for root device
	[  +1.082826] kauditd_printk_skb: 194 callbacks suppressed
	[  +5.357100] kauditd_printk_skb: 36 callbacks suppressed
	[ +12.626635] systemd-fstab-generator[7277]: Ignoring "noauto" option for root device
	[  +6.406588] kauditd_printk_skb: 28 callbacks suppressed
	[  +8.198317] kauditd_printk_skb: 3 callbacks suppressed
	[Sep12 21:47] kauditd_printk_skb: 25 callbacks suppressed
	[ +11.526734] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.475924] kauditd_printk_skb: 23 callbacks suppressed
	[  +9.646480] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.451747] kauditd_printk_skb: 20 callbacks suppressed
	
	
	==> etcd [5940bab20e79] <==
	{"level":"info","ts":"2024-09-12T21:46:26.122210Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-09-12T21:46:26.122276Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T21:46:26.122307Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T21:46:26.123306Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-12T21:46:26.124094Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-12T21:46:26.124152Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-12T21:46:26.124206Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-12T21:46:26.125135Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-12T21:46:26.125175Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-12T21:46:27.212537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-12T21:46:27.212576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-12T21:46:27.212599Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-12T21:46:27.212609Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-09-12T21:46:27.212615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-12T21:46:27.212622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-09-12T21:46:27.212628Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-12T21:46:27.213796Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-384000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-12T21:46:27.213815Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-12T21:46:27.213910Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-12T21:46:27.213925Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-12T21:46:27.213942Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-12T21:46:27.214559Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-12T21:46:27.214748Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-12T21:46:27.215255Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-12T21:46:27.215575Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [fc70bb9d850d] <==
	{"level":"info","ts":"2024-09-12T21:45:40.585954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-12T21:45:40.585995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-09-12T21:45:40.586025Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-09-12T21:45:40.586042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-12T21:45:40.586080Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-09-12T21:45:40.586138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-12T21:45:40.587829Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-384000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-12T21:45:40.587979Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-12T21:45:40.588157Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-12T21:45:40.588565Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-12T21:45:40.589035Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-12T21:45:40.589518Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-12T21:45:40.590122Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-12T21:45:40.602542Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-12T21:45:40.602602Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-12T21:46:10.973091Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-12T21:46:10.973114Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-384000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-09-12T21:46:10.973146Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-12T21:46:10.973186Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-12T21:46:10.982079Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-12T21:46:10.982131Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-12T21:46:10.982153Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-09-12T21:46:10.983532Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-12T21:46:10.983570Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-12T21:46:10.983574Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-384000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> kernel <==
	 21:47:44 up 2 min,  0 users,  load average: 0.71, 0.44, 0.18
	Linux functional-384000 5.10.207 #1 SMP PREEMPT Thu Sep 12 17:20:51 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [201611a55c1f] <==
	I0912 21:46:27.778309       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0912 21:46:27.778319       1 policy_source.go:224] refreshing policies
	I0912 21:46:27.782147       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0912 21:46:27.793474       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0912 21:46:27.793822       1 aggregator.go:171] initial CRD sync complete...
	I0912 21:46:27.793831       1 autoregister_controller.go:144] Starting autoregister controller
	I0912 21:46:27.793834       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0912 21:46:27.793837       1 cache.go:39] Caches are synced for autoregister controller
	I0912 21:46:27.804682       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0912 21:46:28.676661       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0912 21:46:29.251308       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0912 21:46:29.259398       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0912 21:46:29.274761       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0912 21:46:29.282005       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0912 21:46:29.283877       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0912 21:46:31.101089       1 controller.go:615] quota admission added evaluator for: endpoints
	I0912 21:46:31.454046       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0912 21:46:48.662704       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.245.116"}
	I0912 21:46:58.683657       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.108.125.237"}
	I0912 21:47:16.162938       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0912 21:47:16.206510       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.104.75.219"}
	I0912 21:47:21.999730       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.97.120.154"}
	I0912 21:47:30.226304       1 controller.go:615] quota admission added evaluator for: namespaces
	I0912 21:47:30.331436       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.196.229"}
	I0912 21:47:30.340732       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.117.16"}
	
	
	==> kube-controller-manager [6ea672d4ee04] <==
	I0912 21:47:30.255317       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="10.291203ms"
	E0912 21:47:30.255337       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0912 21:47:30.260137       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="9.321153ms"
	E0912 21:47:30.260237       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0912 21:47:30.260262       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="2.448602ms"
	E0912 21:47:30.260292       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0912 21:47:30.265383       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="2.436063ms"
	E0912 21:47:30.265400       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0912 21:47:30.266736       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="4.642163ms"
	E0912 21:47:30.266768       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0912 21:47:30.277808       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="9.878104ms"
	I0912 21:47:30.281513       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="3.681402ms"
	I0912 21:47:30.298212       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="16.673125ms"
	I0912 21:47:30.298359       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="32.786µs"
	I0912 21:47:30.306311       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="18.434839ms"
	I0912 21:47:30.327294       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="20.951638ms"
	I0912 21:47:30.334205       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.878508ms"
	I0912 21:47:30.334324       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="22.704µs"
	I0912 21:47:36.106431       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="4.4702ms"
	I0912 21:47:36.106509       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="29.662µs"
	I0912 21:47:37.111090       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="26.538µs"
	I0912 21:47:38.142056       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="70.947µs"
	I0912 21:47:38.177025       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.727671ms"
	I0912 21:47:38.177083       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="18.955µs"
	I0912 21:47:39.271730       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="29.203µs"
	
	
	==> kube-controller-manager [d38279e188b4] <==
	I0912 21:45:44.466889       1 shared_informer.go:320] Caches are synced for PVC protection
	I0912 21:45:44.468398       1 shared_informer.go:320] Caches are synced for ephemeral
	I0912 21:45:44.469458       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0912 21:45:44.469549       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0912 21:45:44.469640       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="41.722µs"
	I0912 21:45:44.477996       1 shared_informer.go:320] Caches are synced for node
	I0912 21:45:44.478062       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0912 21:45:44.478095       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0912 21:45:44.478125       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0912 21:45:44.478128       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0912 21:45:44.478183       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-384000"
	I0912 21:45:44.519907       1 shared_informer.go:320] Caches are synced for attach detach
	I0912 21:45:44.521157       1 shared_informer.go:320] Caches are synced for disruption
	I0912 21:45:44.626814       1 shared_informer.go:320] Caches are synced for resource quota
	I0912 21:45:44.669675       1 shared_informer.go:320] Caches are synced for resource quota
	I0912 21:45:44.669947       1 shared_informer.go:320] Caches are synced for taint
	I0912 21:45:44.670074       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0912 21:45:44.670182       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-384000"
	I0912 21:45:44.670261       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0912 21:45:44.717615       1 shared_informer.go:320] Caches are synced for daemon sets
	I0912 21:45:45.088117       1 shared_informer.go:320] Caches are synced for garbage collector
	I0912 21:45:45.165571       1 shared_informer.go:320] Caches are synced for garbage collector
	I0912 21:45:45.165617       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0912 21:45:49.973497       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="27.09915ms"
	I0912 21:45:49.973553       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="26.692µs"
	
	
	==> kube-proxy [12984ce9e901] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0912 21:46:28.631579       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0912 21:46:28.641374       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0912 21:46:28.641406       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0912 21:46:28.651222       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0912 21:46:28.651239       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0912 21:46:28.651252       1 server_linux.go:169] "Using iptables Proxier"
	I0912 21:46:28.651863       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0912 21:46:28.651956       1 server.go:483] "Version info" version="v1.31.1"
	I0912 21:46:28.651964       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 21:46:28.652395       1 config.go:199] "Starting service config controller"
	I0912 21:46:28.652407       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0912 21:46:28.652416       1 config.go:105] "Starting endpoint slice config controller"
	I0912 21:46:28.652419       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0912 21:46:28.652637       1 config.go:328] "Starting node config controller"
	I0912 21:46:28.652648       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0912 21:46:28.753106       1 shared_informer.go:320] Caches are synced for service config
	I0912 21:46:28.753106       1 shared_informer.go:320] Caches are synced for node config
	I0912 21:46:28.753119       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [53faa01db79a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0912 21:45:41.727973       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0912 21:45:41.732134       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0912 21:45:41.732162       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0912 21:45:41.739244       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0912 21:45:41.739259       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0912 21:45:41.739269       1 server_linux.go:169] "Using iptables Proxier"
	I0912 21:45:41.739832       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0912 21:45:41.739918       1 server.go:483] "Version info" version="v1.31.1"
	I0912 21:45:41.739927       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 21:45:41.740452       1 config.go:199] "Starting service config controller"
	I0912 21:45:41.740461       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0912 21:45:41.740469       1 config.go:105] "Starting endpoint slice config controller"
	I0912 21:45:41.740472       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0912 21:45:41.740626       1 config.go:328] "Starting node config controller"
	I0912 21:45:41.740635       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0912 21:45:41.840905       1 shared_informer.go:320] Caches are synced for node config
	I0912 21:45:41.840937       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0912 21:45:41.840918       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [5e28c3b1657f] <==
	I0912 21:46:26.951497       1 serving.go:386] Generated self-signed cert in-memory
	W0912 21:46:27.703575       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0912 21:46:27.703687       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0912 21:46:27.703710       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0912 21:46:27.703728       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0912 21:46:27.721626       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0912 21:46:27.721719       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 21:46:27.726213       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0912 21:46:27.730138       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0912 21:46:27.730417       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0912 21:46:27.730461       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0912 21:46:27.831035       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [6ddc42747539] <==
	I0912 21:45:40.423883       1 serving.go:386] Generated self-signed cert in-memory
	W0912 21:45:41.114587       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0912 21:45:41.114867       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0912 21:45:41.114885       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0912 21:45:41.114894       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0912 21:45:41.143095       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0912 21:45:41.143115       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 21:45:41.146259       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0912 21:45:41.146458       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0912 21:45:41.146489       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0912 21:45:41.146512       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0912 21:45:41.249912       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0912 21:46:10.986669       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 12 21:47:22 functional-384000 kubelet[6214]: I0912 21:47:22.880012    6214 scope.go:117] "RemoveContainer" containerID="40dde81e000caae6ea9159c7c6c3c58f133ecc664ebf8e7cde4de3c4fb824c20"
	Sep 12 21:47:22 functional-384000 kubelet[6214]: E0912 21:47:22.880100    6214 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 10s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-vc7hw_default(1bff9fcb-c996-4630-ac27-ff2ca00b3c31)\"" pod="default/hello-node-connect-65d86f57f4-vc7hw" podUID="1bff9fcb-c996-4630-ac27-ff2ca00b3c31"
	Sep 12 21:47:23 functional-384000 kubelet[6214]: I0912 21:47:23.910970    6214 scope.go:117] "RemoveContainer" containerID="b6d7a2d4f4b4b3d0f8eae35516aaf0476704c25c0d32d2aed7a0f27919dff3ec"
	Sep 12 21:47:23 functional-384000 kubelet[6214]: I0912 21:47:23.911336    6214 scope.go:117] "RemoveContainer" containerID="82dfda5b93270a263fbf9e8d8cc1956f498fa428a04fb5f304039918074934ac"
	Sep 12 21:47:23 functional-384000 kubelet[6214]: E0912 21:47:23.911485    6214 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 10s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-rh5f2_default(4a29721f-ddf2-4c3b-b13b-cd916dc732a5)\"" pod="default/hello-node-64b4f8f9ff-rh5f2" podUID="4a29721f-ddf2-4c3b-b13b-cd916dc732a5"
	Sep 12 21:47:25 functional-384000 kubelet[6214]: E0912 21:47:25.138630    6214 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 12 21:47:25 functional-384000 kubelet[6214]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 12 21:47:25 functional-384000 kubelet[6214]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 12 21:47:25 functional-384000 kubelet[6214]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 12 21:47:25 functional-384000 kubelet[6214]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 12 21:47:25 functional-384000 kubelet[6214]: I0912 21:47:25.213650    6214 scope.go:117] "RemoveContainer" containerID="adf88b8b7eb960a8f1f8ac2ae701f07b317f250e0942bc4b2ac7332ff334caa2"
	Sep 12 21:47:30 functional-384000 kubelet[6214]: I0912 21:47:30.410682    6214 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4ca294e0-fb77-481b-84d5-eeb7e0e22c72-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-cx7v7\" (UID: \"4ca294e0-fb77-481b-84d5-eeb7e0e22c72\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-cx7v7"
	Sep 12 21:47:30 functional-384000 kubelet[6214]: I0912 21:47:30.410712    6214 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/69b73caa-befb-46e2-9936-0c0e855b6c4e-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-dqgwq\" (UID: \"69b73caa-befb-46e2-9936-0c0e855b6c4e\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-dqgwq"
	Sep 12 21:47:30 functional-384000 kubelet[6214]: I0912 21:47:30.410739    6214 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvzrt\" (UniqueName: \"kubernetes.io/projected/4ca294e0-fb77-481b-84d5-eeb7e0e22c72-kube-api-access-pvzrt\") pod \"dashboard-metrics-scraper-c5db448b4-cx7v7\" (UID: \"4ca294e0-fb77-481b-84d5-eeb7e0e22c72\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-cx7v7"
	Sep 12 21:47:30 functional-384000 kubelet[6214]: I0912 21:47:30.410753    6214 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xldr8\" (UniqueName: \"kubernetes.io/projected/69b73caa-befb-46e2-9936-0c0e855b6c4e-kube-api-access-xldr8\") pod \"kubernetes-dashboard-695b96c756-dqgwq\" (UID: \"69b73caa-befb-46e2-9936-0c0e855b6c4e\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-dqgwq"
	Sep 12 21:47:36 functional-384000 kubelet[6214]: I0912 21:47:36.129954    6214 scope.go:117] "RemoveContainer" containerID="40dde81e000caae6ea9159c7c6c3c58f133ecc664ebf8e7cde4de3c4fb824c20"
	Sep 12 21:47:37 functional-384000 kubelet[6214]: I0912 21:47:37.104974    6214 scope.go:117] "RemoveContainer" containerID="40dde81e000caae6ea9159c7c6c3c58f133ecc664ebf8e7cde4de3c4fb824c20"
	Sep 12 21:47:37 functional-384000 kubelet[6214]: I0912 21:47:37.105093    6214 scope.go:117] "RemoveContainer" containerID="0df32a8efdd6d86a5749cd0350b0ccd395e5400bc81bc5200390cb682771200e"
	Sep 12 21:47:37 functional-384000 kubelet[6214]: E0912 21:47:37.105150    6214 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-vc7hw_default(1bff9fcb-c996-4630-ac27-ff2ca00b3c31)\"" pod="default/hello-node-connect-65d86f57f4-vc7hw" podUID="1bff9fcb-c996-4630-ac27-ff2ca00b3c31"
	Sep 12 21:47:37 functional-384000 kubelet[6214]: I0912 21:47:37.111583    6214 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-dqgwq" podStartSLOduration=2.155841564 podStartE2EDuration="7.11157147s" podCreationTimestamp="2024-09-12 21:47:30 +0000 UTC" firstStartedPulling="2024-09-12 21:47:30.691946622 +0000 UTC m=+65.624212036" lastFinishedPulling="2024-09-12 21:47:35.647676486 +0000 UTC m=+70.579941942" observedRunningTime="2024-09-12 21:47:36.103000682 +0000 UTC m=+71.035266138" watchObservedRunningTime="2024-09-12 21:47:37.11157147 +0000 UTC m=+72.043836925"
	Sep 12 21:47:38 functional-384000 kubelet[6214]: I0912 21:47:38.129456    6214 scope.go:117] "RemoveContainer" containerID="82dfda5b93270a263fbf9e8d8cc1956f498fa428a04fb5f304039918074934ac"
	Sep 12 21:47:38 functional-384000 kubelet[6214]: I0912 21:47:38.169887    6214 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-cx7v7" podStartSLOduration=1.32463271 podStartE2EDuration="8.169872907s" podCreationTimestamp="2024-09-12 21:47:30 +0000 UTC" firstStartedPulling="2024-09-12 21:47:30.714986046 +0000 UTC m=+65.647251502" lastFinishedPulling="2024-09-12 21:47:37.560226285 +0000 UTC m=+72.492491699" observedRunningTime="2024-09-12 21:47:38.169680813 +0000 UTC m=+73.101946268" watchObservedRunningTime="2024-09-12 21:47:38.169872907 +0000 UTC m=+73.102138363"
	Sep 12 21:47:39 functional-384000 kubelet[6214]: I0912 21:47:39.184678    6214 scope.go:117] "RemoveContainer" containerID="82dfda5b93270a263fbf9e8d8cc1956f498fa428a04fb5f304039918074934ac"
	Sep 12 21:47:39 functional-384000 kubelet[6214]: I0912 21:47:39.184772    6214 scope.go:117] "RemoveContainer" containerID="024f2a1942be0ec421a4e6cf5fad163b136c00efe78a63f61eefdfbfe93ed103"
	Sep 12 21:47:39 functional-384000 kubelet[6214]: E0912 21:47:39.184819    6214 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-rh5f2_default(4a29721f-ddf2-4c3b-b13b-cd916dc732a5)\"" pod="default/hello-node-64b4f8f9ff-rh5f2" podUID="4a29721f-ddf2-4c3b-b13b-cd916dc732a5"
	
	
	==> kubernetes-dashboard [f4b618d13d0e] <==
	2024/09/12 21:47:35 Using namespace: kubernetes-dashboard
	2024/09/12 21:47:35 Using in-cluster config to connect to apiserver
	2024/09/12 21:47:35 Using secret token for csrf signing
	2024/09/12 21:47:35 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/12 21:47:35 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/12 21:47:35 Successful initial request to the apiserver, version: v1.31.1
	2024/09/12 21:47:35 Generating JWE encryption key
	2024/09/12 21:47:35 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/12 21:47:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/12 21:47:35 Initializing JWE encryption key from synchronized object
	2024/09/12 21:47:35 Creating in-cluster Sidecar client
	2024/09/12 21:47:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/12 21:47:35 Serving insecurely on HTTP port: 9090
	2024/09/12 21:47:35 Starting overwatch
	
	
	==> storage-provisioner [366343e68a4f] <==
	I0912 21:45:41.676335       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0912 21:45:41.690734       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0912 21:45:41.690761       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0912 21:45:41.702038       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0912 21:45:41.702436       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-384000_ab912da5-7dd7-4498-ba38-c651dbefa72a!
	I0912 21:45:41.702374       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a85814d1-4607-4f83-8a07-55b59db42e1e", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-384000_ab912da5-7dd7-4498-ba38-c651dbefa72a became leader
	I0912 21:45:41.803122       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-384000_ab912da5-7dd7-4498-ba38-c651dbefa72a!
	
	
	==> storage-provisioner [628809c637d4] <==
	I0912 21:46:28.604909       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0912 21:46:28.611424       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0912 21:46:28.611441       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0912 21:46:46.022679       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0912 21:46:46.023571       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-384000_64eebdb6-c768-46b4-9c9e-dd480559512b!
	I0912 21:46:46.023321       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a85814d1-4607-4f83-8a07-55b59db42e1e", APIVersion:"v1", ResourceVersion:"597", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-384000_64eebdb6-c768-46b4-9c9e-dd480559512b became leader
	I0912 21:46:46.129620       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-384000_64eebdb6-c768-46b4-9c9e-dd480559512b!
	I0912 21:46:58.926577       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0912 21:46:58.926658       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    20f175a1-6563-4f20-a718-574c59ccbe6a 297 0 2024-09-12 21:45:14 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-12 21:45:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-e630469b-b373-49ad-bf87-0747ad7d96bb &PersistentVolumeClaim{ObjectMeta:{myclaim  default  e630469b-b373-49ad-bf87-0747ad7d96bb 652 0 2024-09-12 21:46:58 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-12 21:46:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-12 21:46:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0912 21:46:58.927067       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-e630469b-b373-49ad-bf87-0747ad7d96bb" provisioned
	I0912 21:46:58.927137       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0912 21:46:58.927158       1 volume_store.go:212] Trying to save persistentvolume "pvc-e630469b-b373-49ad-bf87-0747ad7d96bb"
	I0912 21:46:58.927423       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"e630469b-b373-49ad-bf87-0747ad7d96bb", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0912 21:46:58.931296       1 volume_store.go:219] persistentvolume "pvc-e630469b-b373-49ad-bf87-0747ad7d96bb" saved
	I0912 21:46:58.932633       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"e630469b-b373-49ad-bf87-0747ad7d96bb", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-e630469b-b373-49ad-bf87-0747ad7d96bb
	

-- /stdout --
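Note on the kube-proxy entries in the dump above: both container instances fail to clean up nftables rules with "Operation not supported" and then fall back to the iptables proxier, which is expected when the guest kernel is built without nftables. A minimal sketch of the same kind of probe, assuming nft is on PATH and using a throwaway table name (probe_table) instead of kube-proxy's:

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	// nftProbe feeds an "add table" command to nft via /dev/stdin, the way
	// the proxier's cleanup path does. "Operation not supported" here means
	// the kernel lacks nftables, so kube-proxy uses iptables instead.
	func nftProbe() error {
		cmd := exec.Command("nft", "-f", "/dev/stdin")
		cmd.Stdin = bytes.NewBufferString("add table ip probe_table\n")
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("nftables probe failed: %v: %s", err, out)
		}
		// Best-effort cleanup of the throwaway table.
		_ = exec.Command("nft", "delete", "table", "ip", "probe_table").Run()
		return nil
	}

	func main() {
		if err := nftProbe(); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("nftables supported")
	}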
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-384000 -n functional-384000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-384000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-384000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-384000 describe pod busybox-mount:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-384000/192.168.105.4
	Start Time:       Thu, 12 Sep 2024 14:47:09 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  docker://4a3ede54c78a04fac3cf522aed39c015b409fd2c53a2e2df3faeaa01714b2540
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 12 Sep 2024 14:47:11 -0700
	      Finished:     Thu, 12 Sep 2024 14:47:11 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8xmf4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-8xmf4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  35s   default-scheduler  Successfully assigned default/busybox-mount to functional-384000
	  Normal  Pulling    35s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     34s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.358s (1.358s including waiting). Image size: 3547125 bytes.
	  Normal  Created    34s   kubelet            Created container mount-munger
	  Normal  Started    34s   kubelet            Started container mount-munger

-- /stdout --
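Note that busybox-mount shows Status: Succeeded, not a failure: helpers_test.go:261 selects pods with status.phase!=Running, and that field selector matches completed pods as well as broken ones. The same query expressed through client-go, as an illustrative sketch using standard client-go packages (this is not the harness's own code):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same selector the post-mortem uses: any pod whose phase is not
		// Running, across all namespaces. Succeeded pods match too.
		pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}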
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (29.19s)
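The kubelet entries in the log dump above show echoserver-arm moving from "back-off 10s" to "back-off 20s": kubelet doubles the CrashLoopBackOff delay on each restart, starting at 10s and capping at 5 minutes, so a crash-looping endpoint keeps the service unreachable for progressively longer windows. The loop below just prints that schedule; it is illustrative arithmetic, not kubelet code:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		const max = 5 * time.Minute // kubelet's documented cap
		d := 10 * time.Second       // initial back-off
		for i := 1; i <= 8; i++ {
			fmt.Printf("restart %d: back-off %s\n", i, d)
			d *= 2
			if d > max {
				d = max
			}
		}
	}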

TestMultiControlPlane/serial/StopSecondaryNode (214.13s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 node stop m02 -v=7 --alsologtostderr
E0912 14:52:03.004258    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/functional-384000/client.crt: no such file or directory" logger="UnhandledError"
E0912 14:52:06.953790    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-771000 node stop m02 -v=7 --alsologtostderr: (12.188792625s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr
E0912 14:52:13.247474    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/functional-384000/client.crt: no such file or directory" logger="UnhandledError"
E0912 14:52:33.730414    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/functional-384000/client.crt: no such file or directory" logger="UnhandledError"
E0912 14:52:34.682053    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.crt: no such file or directory" logger="UnhandledError"
E0912 14:53:14.692785    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/functional-384000/client.crt: no such file or directory" logger="UnhandledError"
E0912 14:54:36.613928    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/functional-384000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr: exit status 7 (2m55.978015541s)

-- stdout --
	ha-771000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-771000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-771000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-771000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0912 14:52:13.229945    3251 out.go:345] Setting OutFile to fd 1 ...
	I0912 14:52:13.230112    3251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 14:52:13.230116    3251 out.go:358] Setting ErrFile to fd 2...
	I0912 14:52:13.230119    3251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 14:52:13.230268    3251 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 14:52:13.230407    3251 out.go:352] Setting JSON to false
	I0912 14:52:13.230423    3251 mustload.go:65] Loading cluster: ha-771000
	I0912 14:52:13.230452    3251 notify.go:220] Checking for updates...
	I0912 14:52:13.230665    3251 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 14:52:13.230675    3251 status.go:255] checking status of ha-771000 ...
	I0912 14:52:13.231374    3251 status.go:330] ha-771000 host status = "Running" (err=<nil>)
	I0912 14:52:13.231380    3251 host.go:66] Checking if "ha-771000" exists ...
	I0912 14:52:13.231474    3251 host.go:66] Checking if "ha-771000" exists ...
	I0912 14:52:13.231586    3251 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 14:52:13.231595    3251 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000/id_rsa Username:docker}
	W0912 14:52:39.155611    3251 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0912 14:52:39.155752    3251 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0912 14:52:39.155789    3251 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0912 14:52:39.155800    3251 status.go:257] ha-771000 status: &{Name:ha-771000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0912 14:52:39.155818    3251 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0912 14:52:39.155829    3251 status.go:255] checking status of ha-771000-m02 ...
	I0912 14:52:39.156240    3251 status.go:330] ha-771000-m02 host status = "Stopped" (err=<nil>)
	I0912 14:52:39.156250    3251 status.go:343] host is not running, skipping remaining checks
	I0912 14:52:39.156255    3251 status.go:257] ha-771000-m02 status: &{Name:ha-771000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 14:52:39.156264    3251 status.go:255] checking status of ha-771000-m03 ...
	I0912 14:52:39.157483    3251 status.go:330] ha-771000-m03 host status = "Running" (err=<nil>)
	I0912 14:52:39.157495    3251 host.go:66] Checking if "ha-771000-m03" exists ...
	I0912 14:52:39.157753    3251 host.go:66] Checking if "ha-771000-m03" exists ...
	I0912 14:52:39.158012    3251 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 14:52:39.158025    3251 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000-m03/id_rsa Username:docker}
	W0912 14:53:54.158778    3251 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0912 14:53:54.158823    3251 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0912 14:53:54.158832    3251 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0912 14:53:54.158836    3251 status.go:257] ha-771000-m03 status: &{Name:ha-771000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0912 14:53:54.158844    3251 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0912 14:53:54.158848    3251 status.go:255] checking status of ha-771000-m04 ...
	I0912 14:53:54.159510    3251 status.go:330] ha-771000-m04 host status = "Running" (err=<nil>)
	I0912 14:53:54.159517    3251 host.go:66] Checking if "ha-771000-m04" exists ...
	I0912 14:53:54.159631    3251 host.go:66] Checking if "ha-771000-m04" exists ...
	I0912 14:53:54.159748    3251 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 14:53:54.159754    3251 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000-m04/id_rsa Username:docker}
	W0912 14:55:09.158815    3251 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0912 14:55:09.158865    3251 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0912 14:55:09.158876    3251 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0912 14:55:09.158880    3251 status.go:257] ha-771000-m04 status: &{Name:ha-771000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0912 14:55:09.158890    3251 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr": ha-771000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-771000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-771000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-771000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr": ha-771000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-771000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-771000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-771000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr": ha-771000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-771000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-771000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-771000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000: exit status 3 (25.961896042s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0912 14:55:35.120155    3283 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0912 14:55:35.120165    3283 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-771000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (214.13s)
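Most of this test's 214s is not the 12s node stop but the status command: each unreachable node above blocks on "dial tcp <ip>:22: connect: operation timed out" for the OS default TCP timeout (roughly 75s per node in the stderr timestamps). A quick reachability probe with an explicit deadline fails fast instead; a minimal sketch using this run's node IPs:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// sshReachable dials port 22 with a short explicit deadline rather than
	// waiting out the kernel's default connect timeout.
	func sshReachable(ip string) error {
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(ip, "22"), 5*time.Second)
		if err != nil {
			return err
		}
		return conn.Close()
	}

	func main() {
		// Node IPs taken from the status output above.
		for _, ip := range []string{"192.168.105.5", "192.168.105.7", "192.168.105.8"} {
			if err := sshReachable(ip); err != nil {
				fmt.Printf("%s: %v\n", ip, err)
				continue
			}
			fmt.Printf("%s: ssh port open\n", ip)
		}
	}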

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (102.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m16.8352825s)
ha_test.go:413: expected profile "ha-771000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-771000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-771000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-771000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000
E0912 14:56:52.729581    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/functional-384000/client.crt: no such file or directory" logger="UnhandledError"
E0912 14:57:06.946932    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000: exit status 3 (25.965916958s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0912 14:57:17.915606    3301 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0912 14:57:17.915649    3301 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-771000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (102.80s)
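The ha_test.go:413 assertion above compares only the Status field of each entry in the profile list JSON against "Degraded". A minimal decoder for that shape, using the same binary path shown in the log (a sketch of the check, not the test's actual code), shows how little of the payload the comparison needs:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// profileList mirrors just the fields of `profile list --output json`
	// that the assertion reads.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			// This run reported "Stopped" where the test expected "Degraded".
			fmt.Printf("%s: %s\n", p.Name, p.Status)
		}
	}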

TestMultiControlPlane/serial/RestartSecondaryNode (208.97s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 node start m02 -v=7 --alsologtostderr
E0912 14:57:20.452342    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/functional-384000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-771000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.138915333s)

-- stdout --
	* Starting "ha-771000-m02" control-plane node in "ha-771000" cluster
	* Restarting existing qemu2 VM for "ha-771000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-771000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
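Both restart attempts above fail on the same "Connection refused" from /var/run/socket_vmnet before qemu ever boots, which points at the socket_vmnet daemon on the host rather than the VM itself. A one-line dial against that socket (path taken from the qemu command line in the stderr block that follows) distinguishes "daemon not listening" from a guest-level failure:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Socket path as passed to socket_vmnet_client in the log.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}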
** stderr ** 
	I0912 14:57:17.987707    3306 out.go:345] Setting OutFile to fd 1 ...
	I0912 14:57:17.988045    3306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 14:57:17.988050    3306 out.go:358] Setting ErrFile to fd 2...
	I0912 14:57:17.988053    3306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 14:57:17.988233    3306 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 14:57:17.988554    3306 mustload.go:65] Loading cluster: ha-771000
	I0912 14:57:17.988864    3306 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0912 14:57:17.989198    3306 host.go:58] "ha-771000-m02" host status: Stopped
	I0912 14:57:17.993634    3306 out.go:177] * Starting "ha-771000-m02" control-plane node in "ha-771000" cluster
	I0912 14:57:17.997580    3306 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 14:57:17.997598    3306 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0912 14:57:17.997606    3306 cache.go:56] Caching tarball of preloaded images
	I0912 14:57:17.997692    3306 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 14:57:17.997699    3306 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0912 14:57:17.997776    3306 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/ha-771000/config.json ...
	I0912 14:57:17.998316    3306 start.go:360] acquireMachinesLock for ha-771000-m02: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 14:57:17.998370    3306 start.go:364] duration metric: took 38.375µs to acquireMachinesLock for "ha-771000-m02"
	I0912 14:57:17.998381    3306 start.go:96] Skipping create...Using existing machine configuration
	I0912 14:57:17.998387    3306 fix.go:54] fixHost starting: m02
	I0912 14:57:17.998563    3306 fix.go:112] recreateIfNeeded on ha-771000-m02: state=Stopped err=<nil>
	W0912 14:57:17.998570    3306 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 14:57:18.003520    3306 out.go:177] * Restarting existing qemu2 VM for "ha-771000-m02" ...
	I0912 14:57:18.007581    3306 qemu.go:418] Using hvf for hardware acceleration
	I0912 14:57:18.007646    3306 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:2b:60:fa:f0:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000-m02/disk.qcow2
	I0912 14:57:18.011227    3306 main.go:141] libmachine: STDOUT: 
	I0912 14:57:18.011252    3306 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 14:57:18.011282    3306 fix.go:56] duration metric: took 12.895209ms for fixHost
	I0912 14:57:18.011286    3306 start.go:83] releasing machines lock for "ha-771000-m02", held for 12.911458ms
	W0912 14:57:18.011293    3306 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 14:57:18.011324    3306 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 14:57:18.011329    3306 start.go:729] Will try again in 5 seconds ...
	I0912 14:57:23.013477    3306 start.go:360] acquireMachinesLock for ha-771000-m02: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 14:57:23.013966    3306 start.go:364] duration metric: took 358.792µs to acquireMachinesLock for "ha-771000-m02"
	I0912 14:57:23.014095    3306 start.go:96] Skipping create...Using existing machine configuration
	I0912 14:57:23.014113    3306 fix.go:54] fixHost starting: m02
	I0912 14:57:23.014795    3306 fix.go:112] recreateIfNeeded on ha-771000-m02: state=Stopped err=<nil>
	W0912 14:57:23.014816    3306 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 14:57:23.022208    3306 out.go:177] * Restarting existing qemu2 VM for "ha-771000-m02" ...
	I0912 14:57:23.026181    3306 qemu.go:418] Using hvf for hardware acceleration
	I0912 14:57:23.026371    3306 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:2b:60:fa:f0:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000-m02/disk.qcow2
	I0912 14:57:23.033167    3306 main.go:141] libmachine: STDOUT: 
	I0912 14:57:23.033226    3306 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 14:57:23.033322    3306 fix.go:56] duration metric: took 19.211833ms for fixHost
	I0912 14:57:23.033339    3306 start.go:83] releasing machines lock for "ha-771000-m02", held for 19.3555ms
	W0912 14:57:23.033519    3306 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-771000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-771000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 14:57:23.038289    3306 out.go:201] 
	W0912 14:57:23.042316    3306 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 14:57:23.042360    3306 out.go:270] * 
	* 
	W0912 14:57:23.048988    3306 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 14:57:23.052170    3306 out.go:201] 

** /stderr **
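
The failing step above is not a Kubernetes problem: the qemu2 driver launches every VM through /opt/socket_vmnet/bin/socket_vmnet_client, and that client could not reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet ("Connection refused"), so qemu never started. A minimal Go sketch of the same reachability check (a hypothetical diagnostic helper, not minikube code) reproduces the error string seen throughout this report:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probeSocket dials the socket_vmnet unix socket the same way
	// socket_vmnet_client must; a daemon that is not running yields
	// "connect: connection refused", matching the log above.
	func probeSocket(path string) error {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			return fmt.Errorf("socket_vmnet not reachable at %s: %w", path, err)
		}
		return conn.Close()
	}

	func main() {
		if err := probeSocket("/var/run/socket_vmnet"); err != nil {
			fmt.Println(err)
		}
	}

If this probe fails on the build host, every qemu2 start in the run will fail the same way, which is consistent with the repeated GUEST_NODE_PROVISION exits below.
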
ha_test.go:422: I0912 14:57:17.987707    3306 out.go:345] Setting OutFile to fd 1 ...
I0912 14:57:17.988045    3306 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 14:57:17.988050    3306 out.go:358] Setting ErrFile to fd 2...
I0912 14:57:17.988053    3306 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 14:57:17.988233    3306 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
I0912 14:57:17.988554    3306 mustload.go:65] Loading cluster: ha-771000
I0912 14:57:17.988864    3306 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
W0912 14:57:17.989198    3306 host.go:58] "ha-771000-m02" host status: Stopped
I0912 14:57:17.993634    3306 out.go:177] * Starting "ha-771000-m02" control-plane node in "ha-771000" cluster
I0912 14:57:17.997580    3306 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0912 14:57:17.997598    3306 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0912 14:57:17.997606    3306 cache.go:56] Caching tarball of preloaded images
I0912 14:57:17.997692    3306 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0912 14:57:17.997699    3306 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0912 14:57:17.997776    3306 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/ha-771000/config.json ...
I0912 14:57:17.998316    3306 start.go:360] acquireMachinesLock for ha-771000-m02: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0912 14:57:17.998370    3306 start.go:364] duration metric: took 38.375µs to acquireMachinesLock for "ha-771000-m02"
I0912 14:57:17.998381    3306 start.go:96] Skipping create...Using existing machine configuration
I0912 14:57:17.998387    3306 fix.go:54] fixHost starting: m02
I0912 14:57:17.998563    3306 fix.go:112] recreateIfNeeded on ha-771000-m02: state=Stopped err=<nil>
W0912 14:57:17.998570    3306 fix.go:138] unexpected machine state, will restart: <nil>
I0912 14:57:18.003520    3306 out.go:177] * Restarting existing qemu2 VM for "ha-771000-m02" ...
I0912 14:57:18.007581    3306 qemu.go:418] Using hvf for hardware acceleration
I0912 14:57:18.007646    3306 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:2b:60:fa:f0:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000-m02/disk.qcow2
I0912 14:57:18.011227    3306 main.go:141] libmachine: STDOUT: 
I0912 14:57:18.011252    3306 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0912 14:57:18.011282    3306 fix.go:56] duration metric: took 12.895209ms for fixHost
I0912 14:57:18.011286    3306 start.go:83] releasing machines lock for "ha-771000-m02", held for 12.911458ms
W0912 14:57:18.011293    3306 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0912 14:57:18.011324    3306 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0912 14:57:18.011329    3306 start.go:729] Will try again in 5 seconds ...
I0912 14:57:23.013477    3306 start.go:360] acquireMachinesLock for ha-771000-m02: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0912 14:57:23.013966    3306 start.go:364] duration metric: took 358.792µs to acquireMachinesLock for "ha-771000-m02"
I0912 14:57:23.014095    3306 start.go:96] Skipping create...Using existing machine configuration
I0912 14:57:23.014113    3306 fix.go:54] fixHost starting: m02
I0912 14:57:23.014795    3306 fix.go:112] recreateIfNeeded on ha-771000-m02: state=Stopped err=<nil>
W0912 14:57:23.014816    3306 fix.go:138] unexpected machine state, will restart: <nil>
I0912 14:57:23.022208    3306 out.go:177] * Restarting existing qemu2 VM for "ha-771000-m02" ...
I0912 14:57:23.026181    3306 qemu.go:418] Using hvf for hardware acceleration
I0912 14:57:23.026371    3306 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:2b:60:fa:f0:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000-m02/disk.qcow2
I0912 14:57:23.033167    3306 main.go:141] libmachine: STDOUT: 
I0912 14:57:23.033226    3306 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0912 14:57:23.033322    3306 fix.go:56] duration metric: took 19.211833ms for fixHost
I0912 14:57:23.033339    3306 start.go:83] releasing machines lock for "ha-771000-m02", held for 19.3555ms
W0912 14:57:23.033519    3306 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-771000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-771000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0912 14:57:23.038289    3306 out.go:201] 
W0912 14:57:23.042316    3306 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0912 14:57:23.042360    3306 out.go:270] * 
* 
W0912 14:57:23.048988    3306 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0912 14:57:23.052170    3306 out.go:201] 

ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-771000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr: exit status 7 (2m57.831785125s)

-- stdout --
	ha-771000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-771000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-771000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-771000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0912 14:57:23.119325    3313 out.go:345] Setting OutFile to fd 1 ...
	I0912 14:57:23.119527    3313 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 14:57:23.119531    3313 out.go:358] Setting ErrFile to fd 2...
	I0912 14:57:23.119534    3313 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 14:57:23.119705    3313 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 14:57:23.119861    3313 out.go:352] Setting JSON to false
	I0912 14:57:23.119876    3313 mustload.go:65] Loading cluster: ha-771000
	I0912 14:57:23.119917    3313 notify.go:220] Checking for updates...
	I0912 14:57:23.120172    3313 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 14:57:23.120181    3313 status.go:255] checking status of ha-771000 ...
	I0912 14:57:23.121044    3313 status.go:330] ha-771000 host status = "Running" (err=<nil>)
	I0912 14:57:23.121061    3313 host.go:66] Checking if "ha-771000" exists ...
	I0912 14:57:23.121183    3313 host.go:66] Checking if "ha-771000" exists ...
	I0912 14:57:23.121314    3313 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 14:57:23.121323    3313 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000/id_rsa Username:docker}
	W0912 14:57:23.121522    3313 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0912 14:57:23.121540    3313 retry.go:31] will retry after 256.74177ms: dial tcp 192.168.105.5:22: connect: host is down
	W0912 14:57:23.380556    3313 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0912 14:57:23.380589    3313 retry.go:31] will retry after 522.2685ms: dial tcp 192.168.105.5:22: connect: host is down
	W0912 14:57:23.905439    3313 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0912 14:57:23.905516    3313 retry.go:31] will retry after 358.734549ms: dial tcp 192.168.105.5:22: connect: host is down
	W0912 14:57:24.266925    3313 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0912 14:57:24.267015    3313 retry.go:31] will retry after 680.454436ms: dial tcp 192.168.105.5:22: connect: host is down
	W0912 14:57:50.875518    3313 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0912 14:57:50.875588    3313 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0912 14:57:50.875598    3313 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0912 14:57:50.875603    3313 status.go:257] ha-771000 status: &{Name:ha-771000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0912 14:57:50.875614    3313 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0912 14:57:50.875617    3313 status.go:255] checking status of ha-771000-m02 ...
	I0912 14:57:50.875830    3313 status.go:330] ha-771000-m02 host status = "Stopped" (err=<nil>)
	I0912 14:57:50.875839    3313 status.go:343] host is not running, skipping remaining checks
	I0912 14:57:50.875841    3313 status.go:257] ha-771000-m02 status: &{Name:ha-771000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 14:57:50.875846    3313 status.go:255] checking status of ha-771000-m03 ...
	I0912 14:57:50.876443    3313 status.go:330] ha-771000-m03 host status = "Running" (err=<nil>)
	I0912 14:57:50.876447    3313 host.go:66] Checking if "ha-771000-m03" exists ...
	I0912 14:57:50.876538    3313 host.go:66] Checking if "ha-771000-m03" exists ...
	I0912 14:57:50.876658    3313 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 14:57:50.876667    3313 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000-m03/id_rsa Username:docker}
	W0912 14:59:05.876827    3313 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0912 14:59:05.876998    3313 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0912 14:59:05.877036    3313 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0912 14:59:05.877060    3313 status.go:257] ha-771000-m03 status: &{Name:ha-771000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0912 14:59:05.877113    3313 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0912 14:59:05.877135    3313 status.go:255] checking status of ha-771000-m04 ...
	I0912 14:59:05.880160    3313 status.go:330] ha-771000-m04 host status = "Running" (err=<nil>)
	I0912 14:59:05.880191    3313 host.go:66] Checking if "ha-771000-m04" exists ...
	I0912 14:59:05.880718    3313 host.go:66] Checking if "ha-771000-m04" exists ...
	I0912 14:59:05.881285    3313 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 14:59:05.881317    3313 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000-m04/id_rsa Username:docker}
	W0912 15:00:20.881726    3313 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0912 15:00:20.881918    3313 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0912 15:00:20.881959    3313 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0912 15:00:20.881980    3313 status.go:257] ha-771000-m04 status: &{Name:ha-771000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0912 15:00:20.882032    3313 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000: exit status 3 (25.993334166s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0912 15:00:46.877459    3658 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0912 15:00:46.877498    3658 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-771000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (208.97s)
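
The nearly three-minute status run above is accounted for by SSH dialing: the status probe retries each node's 22/tcp endpoint a few times with sub-second delays, and the final dials to the running-but-unreachable m03 and m04 each end in "operation timed out" after exactly 75 seconds. A Go sketch of that dial-with-retry shape (the delays here are illustrative, not minikube's own):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// dialWithRetry makes an initial attempt, then one retry per delay,
	// mirroring the sshutil pattern in the log ("will retry after ...").
	func dialWithRetry(addr string, delays []time.Duration) (net.Conn, error) {
		conn, err := net.DialTimeout("tcp", addr, 75*time.Second)
		for _, d := range delays {
			if err == nil {
				return conn, nil
			}
			fmt.Printf("dial failure (will retry after %v): %v\n", d, err)
			time.Sleep(d)
			conn, err = net.DialTimeout("tcp", addr, 75*time.Second)
		}
		return conn, err
	}

	func main() {
		delays := []time.Duration{250 * time.Millisecond, 500 * time.Millisecond, time.Second}
		if _, err := dialWithRetry("192.168.105.5:22", delays); err != nil {
			fmt.Println("giving up:", err)
		}
	}

With four nodes and a 75-second timeout per dead dial, a status sweep of this cluster cannot finish quickly, so the 2m57s duration follows directly from the hosts never coming up.
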

TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.42s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-771000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-771000 -v=7 --alsologtostderr
E0912 15:02:06.938720    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.crt: no such file or directory" logger="UnhandledError"
E0912 15:03:30.027700    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-771000 -v=7 --alsologtostderr: (3m49.029488333s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-771000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-771000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.226799667s)

-- stdout --
	* [ha-771000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-771000" primary control-plane node in "ha-771000" cluster
	* Restarting existing qemu2 VM for "ha-771000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-771000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:05:55.449772    3830 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:05:55.449935    3830 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:05:55.449943    3830 out.go:358] Setting ErrFile to fd 2...
	I0912 15:05:55.449946    3830 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:05:55.450127    3830 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:05:55.451417    3830 out.go:352] Setting JSON to false
	I0912 15:05:55.470691    3830 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3919,"bootTime":1726174836,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:05:55.470761    3830 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:05:55.476479    3830 out.go:177] * [ha-771000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:05:55.484444    3830 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:05:55.484472    3830 notify.go:220] Checking for updates...
	I0912 15:05:55.492420    3830 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:05:55.496467    3830 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:05:55.499501    3830 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:05:55.502485    3830 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:05:55.505402    3830 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:05:55.508793    3830 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:05:55.508846    3830 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:05:55.513364    3830 out.go:177] * Using the qemu2 driver based on existing profile
	I0912 15:05:55.520456    3830 start.go:297] selected driver: qemu2
	I0912 15:05:55.520462    3830 start.go:901] validating driver "qemu2" against &{Name:ha-771000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-771000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:05:55.520537    3830 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:05:55.523359    3830 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:05:55.523391    3830 cni.go:84] Creating CNI manager for ""
	I0912 15:05:55.523396    3830 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0912 15:05:55.523460    3830 start.go:340] cluster config:
	{Name:ha-771000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-771000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:05:55.527573    3830 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:05:55.536420    3830 out.go:177] * Starting "ha-771000" primary control-plane node in "ha-771000" cluster
	I0912 15:05:55.540259    3830 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 15:05:55.540272    3830 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0912 15:05:55.540277    3830 cache.go:56] Caching tarball of preloaded images
	I0912 15:05:55.540335    3830 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:05:55.540340    3830 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0912 15:05:55.540410    3830 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/ha-771000/config.json ...
	I0912 15:05:55.540963    3830 start.go:360] acquireMachinesLock for ha-771000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:05:55.540999    3830 start.go:364] duration metric: took 29.833µs to acquireMachinesLock for "ha-771000"
	I0912 15:05:55.541010    3830 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:05:55.541015    3830 fix.go:54] fixHost starting: 
	I0912 15:05:55.541157    3830 fix.go:112] recreateIfNeeded on ha-771000: state=Stopped err=<nil>
	W0912 15:05:55.541164    3830 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 15:05:55.545424    3830 out.go:177] * Restarting existing qemu2 VM for "ha-771000" ...
	I0912 15:05:55.552464    3830 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:05:55.552512    3830 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:53:9f:15:39:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000/disk.qcow2
	I0912 15:05:55.554637    3830 main.go:141] libmachine: STDOUT: 
	I0912 15:05:55.554658    3830 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:05:55.554687    3830 fix.go:56] duration metric: took 13.672209ms for fixHost
	I0912 15:05:55.554691    3830 start.go:83] releasing machines lock for "ha-771000", held for 13.68825ms
	W0912 15:05:55.554698    3830 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:05:55.554734    3830 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:05:55.554739    3830 start.go:729] Will try again in 5 seconds ...
	I0912 15:06:00.556807    3830 start.go:360] acquireMachinesLock for ha-771000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:06:00.557308    3830 start.go:364] duration metric: took 388.084µs to acquireMachinesLock for "ha-771000"
	I0912 15:06:00.557442    3830 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:06:00.557463    3830 fix.go:54] fixHost starting: 
	I0912 15:06:00.558168    3830 fix.go:112] recreateIfNeeded on ha-771000: state=Stopped err=<nil>
	W0912 15:06:00.558195    3830 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 15:06:00.562817    3830 out.go:177] * Restarting existing qemu2 VM for "ha-771000" ...
	I0912 15:06:00.570581    3830 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:06:00.570750    3830 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:53:9f:15:39:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000/disk.qcow2
	I0912 15:06:00.580366    3830 main.go:141] libmachine: STDOUT: 
	I0912 15:06:00.580426    3830 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:06:00.580530    3830 fix.go:56] duration metric: took 23.071ms for fixHost
	I0912 15:06:00.580545    3830 start.go:83] releasing machines lock for "ha-771000", held for 23.209625ms
	W0912 15:06:00.580722    3830 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-771000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-771000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:06:00.588524    3830 out.go:201] 
	W0912 15:06:00.592803    3830 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:06:00.592842    3830 out.go:270] * 
	* 
	W0912 15:06:00.595385    3830 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:06:00.603731    3830 out.go:201] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-771000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-771000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000: exit status 7 (32.538167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-771000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.42s)
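
Every start attempt in this report follows the same control flow: fixHost launches the VM, logs "StartHost failed, but will try again", sleeps five seconds, retries once, and then exits with status 80 (GUEST_PROVISION here, GUEST_NODE_PROVISION for secondary nodes). A compact Go sketch of that two-attempt shape; the command and arguments are stand-ins for the socket_vmnet_client/qemu invocation in the log, not minikube's actual driver code:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// startVM tries the driver command twice with a fixed 5-second pause,
	// mirroring the retry loop visible in the log, then gives up.
	func startVM(name string, args ...string) error {
		var err error
		for attempt := 0; attempt < 2; attempt++ {
			if attempt > 0 {
				fmt.Println("! StartHost failed, but will try again:", err)
				time.Sleep(5 * time.Second)
			}
			out, runErr := exec.Command(name, args...).CombinedOutput()
			if runErr == nil {
				return nil
			}
			err = fmt.Errorf("driver start: %s: %w", out, runErr)
		}
		return err
	}

	func main() {
		err := startVM("/opt/socket_vmnet/bin/socket_vmnet_client", "/var/run/socket_vmnet", "qemu-system-aarch64")
		if err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			os.Exit(80)
		}
	}

Because the retry is fixed at one attempt after five seconds, a dead socket_vmnet daemon turns every restart into the same ~5-second exit-80 failure seen in this and the following tests.
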

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-771000 node delete m03 -v=7 --alsologtostderr: exit status 83 (43.833375ms)

-- stdout --
	* The control-plane node ha-771000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-771000"

-- /stdout --
** stderr ** 
	I0912 15:06:00.744316    3845 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:06:00.744609    3845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:06:00.744612    3845 out.go:358] Setting ErrFile to fd 2...
	I0912 15:06:00.744614    3845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:06:00.744768    3845 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:06:00.744986    3845 mustload.go:65] Loading cluster: ha-771000
	I0912 15:06:00.745225    3845 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0912 15:06:00.745543    3845 out.go:270] ! The control-plane node ha-771000 host is not running (will try others): state=Stopped
	! The control-plane node ha-771000 host is not running (will try others): state=Stopped
	W0912 15:06:00.745648    3845 out.go:270] ! The control-plane node ha-771000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-771000-m02 host is not running (will try others): state=Stopped
	I0912 15:06:00.750986    3845 out.go:177] * The control-plane node ha-771000-m03 host is not running: state=Stopped
	I0912 15:06:00.753949    3845 out.go:177]   To start a cluster, run: "minikube start -p ha-771000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-771000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr: exit status 7 (30.06575ms)

-- stdout --
	ha-771000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-771000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-771000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-771000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0912 15:06:00.786923    3847 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:06:00.787067    3847 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:06:00.787070    3847 out.go:358] Setting ErrFile to fd 2...
	I0912 15:06:00.787072    3847 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:06:00.787227    3847 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:06:00.787340    3847 out.go:352] Setting JSON to false
	I0912 15:06:00.787358    3847 mustload.go:65] Loading cluster: ha-771000
	I0912 15:06:00.787409    3847 notify.go:220] Checking for updates...
	I0912 15:06:00.787587    3847 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:06:00.787593    3847 status.go:255] checking status of ha-771000 ...
	I0912 15:06:00.787798    3847 status.go:330] ha-771000 host status = "Stopped" (err=<nil>)
	I0912 15:06:00.787801    3847 status.go:343] host is not running, skipping remaining checks
	I0912 15:06:00.787803    3847 status.go:257] ha-771000 status: &{Name:ha-771000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 15:06:00.787813    3847 status.go:255] checking status of ha-771000-m02 ...
	I0912 15:06:00.787901    3847 status.go:330] ha-771000-m02 host status = "Stopped" (err=<nil>)
	I0912 15:06:00.787904    3847 status.go:343] host is not running, skipping remaining checks
	I0912 15:06:00.787905    3847 status.go:257] ha-771000-m02 status: &{Name:ha-771000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 15:06:00.787909    3847 status.go:255] checking status of ha-771000-m03 ...
	I0912 15:06:00.787994    3847 status.go:330] ha-771000-m03 host status = "Stopped" (err=<nil>)
	I0912 15:06:00.787996    3847 status.go:343] host is not running, skipping remaining checks
	I0912 15:06:00.787998    3847 status.go:257] ha-771000-m03 status: &{Name:ha-771000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 15:06:00.788002    3847 status.go:255] checking status of ha-771000-m04 ...
	I0912 15:06:00.788106    3847 status.go:330] ha-771000-m04 host status = "Stopped" (err=<nil>)
	I0912 15:06:00.788112    3847 status.go:343] host is not running, skipping remaining checks
	I0912 15:06:00.788114    3847 status.go:257] ha-771000-m04 status: &{Name:ha-771000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000: exit status 7 (30.638459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-771000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.02s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-771000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-771000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-771000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-771000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000: exit status 7 (51.239417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-771000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.02s)

TestMultiControlPlane/serial/StopCluster (202.09s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 stop -v=7 --alsologtostderr
E0912 15:06:52.712900    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/functional-384000/client.crt: no such file or directory" logger="UnhandledError"
E0912 15:07:06.929756    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.crt: no such file or directory" logger="UnhandledError"
E0912 15:08:15.797670    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/functional-384000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-771000 stop -v=7 --alsologtostderr: (3m21.991261459s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr: exit status 7 (67.265958ms)

-- stdout --
	ha-771000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-771000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-771000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-771000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0912 15:09:23.891412    3900 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:09:23.891600    3900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:09:23.891605    3900 out.go:358] Setting ErrFile to fd 2...
	I0912 15:09:23.891609    3900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:09:23.891771    3900 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:09:23.891928    3900 out.go:352] Setting JSON to false
	I0912 15:09:23.891944    3900 mustload.go:65] Loading cluster: ha-771000
	I0912 15:09:23.891985    3900 notify.go:220] Checking for updates...
	I0912 15:09:23.892238    3900 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:09:23.892246    3900 status.go:255] checking status of ha-771000 ...
	I0912 15:09:23.892513    3900 status.go:330] ha-771000 host status = "Stopped" (err=<nil>)
	I0912 15:09:23.892517    3900 status.go:343] host is not running, skipping remaining checks
	I0912 15:09:23.892520    3900 status.go:257] ha-771000 status: &{Name:ha-771000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 15:09:23.892532    3900 status.go:255] checking status of ha-771000-m02 ...
	I0912 15:09:23.892657    3900 status.go:330] ha-771000-m02 host status = "Stopped" (err=<nil>)
	I0912 15:09:23.892661    3900 status.go:343] host is not running, skipping remaining checks
	I0912 15:09:23.892663    3900 status.go:257] ha-771000-m02 status: &{Name:ha-771000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 15:09:23.892668    3900 status.go:255] checking status of ha-771000-m03 ...
	I0912 15:09:23.892788    3900 status.go:330] ha-771000-m03 host status = "Stopped" (err=<nil>)
	I0912 15:09:23.892792    3900 status.go:343] host is not running, skipping remaining checks
	I0912 15:09:23.892794    3900 status.go:257] ha-771000-m03 status: &{Name:ha-771000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 15:09:23.892798    3900 status.go:255] checking status of ha-771000-m04 ...
	I0912 15:09:23.892923    3900 status.go:330] ha-771000-m04 host status = "Stopped" (err=<nil>)
	I0912 15:09:23.892928    3900 status.go:343] host is not running, skipping remaining checks
	I0912 15:09:23.892930    3900 status.go:257] ha-771000-m04 status: &{Name:ha-771000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr": ha-771000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-771000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-771000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-771000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr": ha-771000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-771000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-771000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-771000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr": ha-771000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-771000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-771000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-771000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000: exit status 7 (32.199458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-771000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (202.09s)
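
The three assertions above (ha_test.go:543, 549, 552) read as counts over the plain-text status output: with every host stopped, the expected numbers of control-plane nodes in a running state and of stopped kubelets/apiservers cannot line up. A minimal Go sketch of that style of check, assuming simple marker counting (the helper below is illustrative, not minikube's actual code):

-- go sketch --
package main

import (
	"fmt"
	"strings"
)

// countStatusMarkers tallies the role/state markers that the ha_test
// assertions appear to key on. Hypothetical helper, for illustration only.
func countStatusMarkers(out string) (controlPlanes, stoppedKubelets, stoppedAPIServers int) {
	controlPlanes = strings.Count(out, "type: Control Plane")
	stoppedKubelets = strings.Count(out, "kubelet: Stopped")
	stoppedAPIServers = strings.Count(out, "apiserver: Stopped")
	return
}

func main() {
	// Abbreviated form of the stdout captured above.
	out := "ha-771000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\n"
	cp, kubelets, apiservers := countStatusMarkers(out)
	fmt.Println(cp, kubelets, apiservers) // prints: 1 1 1
}
-- /go sketch --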

TestMultiControlPlane/serial/RestartCluster (5.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-771000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-771000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.182181458s)

-- stdout --
	* [ha-771000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-771000" primary control-plane node in "ha-771000" cluster
	* Restarting existing qemu2 VM for "ha-771000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-771000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:09:23.954482    3904 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:09:23.954611    3904 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:09:23.954615    3904 out.go:358] Setting ErrFile to fd 2...
	I0912 15:09:23.954617    3904 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:09:23.954774    3904 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:09:23.955766    3904 out.go:352] Setting JSON to false
	I0912 15:09:23.971650    3904 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4127,"bootTime":1726174836,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:09:23.971725    3904 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:09:23.976912    3904 out.go:177] * [ha-771000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:09:23.983817    3904 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:09:23.983870    3904 notify.go:220] Checking for updates...
	I0912 15:09:23.990821    3904 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:09:23.993793    3904 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:09:23.996886    3904 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:09:23.999845    3904 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:09:24.002831    3904 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:09:24.006132    3904 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:09:24.006382    3904 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:09:24.010835    3904 out.go:177] * Using the qemu2 driver based on existing profile
	I0912 15:09:24.017816    3904 start.go:297] selected driver: qemu2
	I0912 15:09:24.017822    3904 start.go:901] validating driver "qemu2" against &{Name:ha-771000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.1 ClusterName:ha-771000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:09:24.017921    3904 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:09:24.020168    3904 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:09:24.020211    3904 cni.go:84] Creating CNI manager for ""
	I0912 15:09:24.020216    3904 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0912 15:09:24.020259    3904 start.go:340] cluster config:
	{Name:ha-771000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-771000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:09:24.023872    3904 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:09:24.030834    3904 out.go:177] * Starting "ha-771000" primary control-plane node in "ha-771000" cluster
	I0912 15:09:24.034789    3904 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 15:09:24.034806    3904 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0912 15:09:24.034816    3904 cache.go:56] Caching tarball of preloaded images
	I0912 15:09:24.034885    3904 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:09:24.034892    3904 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0912 15:09:24.034963    3904 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/ha-771000/config.json ...
	I0912 15:09:24.035414    3904 start.go:360] acquireMachinesLock for ha-771000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:09:24.035449    3904 start.go:364] duration metric: took 28.792µs to acquireMachinesLock for "ha-771000"
	I0912 15:09:24.035460    3904 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:09:24.035464    3904 fix.go:54] fixHost starting: 
	I0912 15:09:24.035576    3904 fix.go:112] recreateIfNeeded on ha-771000: state=Stopped err=<nil>
	W0912 15:09:24.035584    3904 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 15:09:24.039855    3904 out.go:177] * Restarting existing qemu2 VM for "ha-771000" ...
	I0912 15:09:24.046727    3904 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:09:24.046770    3904 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:53:9f:15:39:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000/disk.qcow2
	I0912 15:09:24.048768    3904 main.go:141] libmachine: STDOUT: 
	I0912 15:09:24.048787    3904 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:09:24.048816    3904 fix.go:56] duration metric: took 13.351875ms for fixHost
	I0912 15:09:24.048820    3904 start.go:83] releasing machines lock for "ha-771000", held for 13.367292ms
	W0912 15:09:24.048827    3904 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:09:24.048864    3904 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:09:24.048869    3904 start.go:729] Will try again in 5 seconds ...
	I0912 15:09:29.050948    3904 start.go:360] acquireMachinesLock for ha-771000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:09:29.051290    3904 start.go:364] duration metric: took 271.334µs to acquireMachinesLock for "ha-771000"
	I0912 15:09:29.051442    3904 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:09:29.051459    3904 fix.go:54] fixHost starting: 
	I0912 15:09:29.052113    3904 fix.go:112] recreateIfNeeded on ha-771000: state=Stopped err=<nil>
	W0912 15:09:29.052139    3904 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 15:09:29.056357    3904 out.go:177] * Restarting existing qemu2 VM for "ha-771000" ...
	I0912 15:09:29.067494    3904 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:09:29.067745    3904 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:53:9f:15:39:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/ha-771000/disk.qcow2
	I0912 15:09:29.076474    3904 main.go:141] libmachine: STDOUT: 
	I0912 15:09:29.076542    3904 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:09:29.076608    3904 fix.go:56] duration metric: took 25.151375ms for fixHost
	I0912 15:09:29.076623    3904 start.go:83] releasing machines lock for "ha-771000", held for 25.312875ms
	W0912 15:09:29.076819    3904 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-771000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-771000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:09:29.084484    3904 out.go:201] 
	W0912 15:09:29.088570    3904 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:09:29.088613    3904 out.go:270] * 
	* 
	W0912 15:09:29.091149    3904 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:09:29.100474    3904 out.go:201] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-771000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000: exit status 7 (69.949584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-771000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
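
Every start attempt in this block fails before qemu can boot: libmachine execs qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot connect to the daemon's unix socket at /var/run/socket_vmnet, so the VM never gets its netdev fd. A quick hypothetical Go probe (socket path copied from the log above; the 2s timeout is an arbitrary choice) that distinguishes "socket_vmnet daemon down" from other driver failures:

-- go sketch --
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Try the same unix socket the driver logs show socket_vmnet_client using.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// "connection refused" here matches the driver error above and means
		// nothing is bound to the socket, i.e. no socket_vmnet daemon is running.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening")
}
-- /go sketch --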

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-771000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-771000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-771000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-771000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\
"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000: exit status 7 (29.224958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-771000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)
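
The expected-vs-got blob above comes from "profile list --output json"; the assertion only consumes the top-level valid[].Status field out of the whole profile config. A small sketch that decodes just those fields, with the struct trimmed to match the JSON shape visible in the log:

-- go sketch --
package main

import (
	"encoding/json"
	"fmt"
)

// profileList mirrors only the fields of the logged JSON that the
// Degraded-vs-Stopped assertion reads; the rest of the blob is ignored.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	// Heavily abbreviated form of the output captured above.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-771000","Status":"Stopped"}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %s\n", p.Name, p.Status) // ha-771000: Stopped (test wanted Degraded)
	}
}
-- /go sketch --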

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-771000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-771000 --control-plane -v=7 --alsologtostderr: exit status 83 (40.234792ms)

-- stdout --
	* The control-plane node ha-771000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-771000"

-- /stdout --
** stderr ** 
	I0912 15:09:29.288326    3919 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:09:29.288452    3919 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:09:29.288456    3919 out.go:358] Setting ErrFile to fd 2...
	I0912 15:09:29.288458    3919 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:09:29.288578    3919 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:09:29.288794    3919 mustload.go:65] Loading cluster: ha-771000
	I0912 15:09:29.289005    3919 config.go:182] Loaded profile config "ha-771000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0912 15:09:29.289307    3919 out.go:270] ! The control-plane node ha-771000 host is not running (will try others): state=Stopped
	! The control-plane node ha-771000 host is not running (will try others): state=Stopped
	W0912 15:09:29.289418    3919 out.go:270] ! The control-plane node ha-771000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-771000-m02 host is not running (will try others): state=Stopped
	I0912 15:09:29.293257    3919 out.go:177] * The control-plane node ha-771000-m03 host is not running: state=Stopped
	I0912 15:09:29.296126    3919 out.go:177]   To start a cluster, run: "minikube start -p ha-771000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-771000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-771000 -n ha-771000: exit status 7 (29.675459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-771000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestImageBuild/serial/Setup (10.01s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-377000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-377000 --driver=qemu2 : exit status 80 (9.941215666s)

-- stdout --
	* [image-377000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-377000" primary control-plane node in "image-377000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-377000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-377000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-377000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-377000 -n image-377000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-377000 -n image-377000: exit status 7 (67.231042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-377000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.01s)

TestJSONOutput/start/Command (9.83s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-722000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-722000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.828638417s)

-- stdout --
	{"specversion":"1.0","id":"f74746b9-f0b6-4b5e-b17e-5b159ccfcd55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-722000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b32b3068-512b-4761-a472-58b40fca99d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19616"}}
	{"specversion":"1.0","id":"3f784e6d-11a9-4159-9bc8-3d2760f33bf3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig"}}
	{"specversion":"1.0","id":"01fce668-f06b-4df3-b4f3-42f5ffe073b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"30bb045d-9697-43fb-a8a9-4a3e52e09ebe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2088ef7a-8d31-4074-8bc2-bde398a908d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube"}}
	{"specversion":"1.0","id":"c90cb864-95b5-4d81-b1ca-2cb987deb398","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8ab97ed3-58aa-4888-ab05-a664287ac53e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2d501590-19ab-43a0-a3a3-d9b389c670c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"1eebb5cd-1a55-41d0-9e01-a9451eebf887","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-722000\" primary control-plane node in \"json-output-722000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"01c38817-1220-49da-85a5-526f7a3fbe23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"21a9e5ea-7847-448b-a468-aedc5efd8c80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-722000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"e98a09d7-dcac-492c-9cab-6903765223f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"172c89e3-e8c0-4ac6-b23e-be4b8ffc46bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"222e41bd-6487-40cf-a064-550bac9e565f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-722000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"a6c9c9b5-13bd-4bd8-bb11-7958092cbe90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"58edd775-b6d1-4677-b428-775fac2f3390","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-722000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.83s)
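
The marshal failure at json_output_test.go:213 is mechanical: the test decodes stdout line by line as JSON cloud events, and the stray "OUTPUT: " line injected by the driver is not JSON, so decoding stops at its first byte ('O'). A short reproduction of that failure mode, assuming a naive per-line json.Unmarshal loop (illustrative, not the test's exact code):

-- go sketch --
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// Two valid cloud-event lines with the driver's plain-text "OUTPUT: "
	// line wedged between them, as in the captured stdout above.
	stdout := "{\"specversion\":\"1.0\",\"type\":\"io.k8s.sigs.minikube.step\"}\n" +
		"OUTPUT: \n" +
		"{\"specversion\":\"1.0\",\"type\":\"io.k8s.sigs.minikube.error\"}"

	for _, line := range strings.Split(stdout, "\n") {
		var ev map[string]interface{}
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			// Prints: invalid character 'O' looking for beginning of value
			fmt.Println("converting to cloud events:", err)
			return
		}
	}
}
-- /go sketch --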

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-722000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-722000 --output=json --user=testUser: exit status 83 (75.484ms)

-- stdout --
	{"specversion":"1.0","id":"c2e733a7-ea2b-4f32-ad51-c905696e04a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-722000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"30abf93d-57f1-49b8-b720-b240812e2f87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-722000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-722000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-722000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-722000 --output=json --user=testUser: exit status 83 (42.790042ms)

-- stdout --
	* The control-plane node json-output-722000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-722000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-722000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-722000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

TestMinikubeProfile (10.42s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-842000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-842000 --driver=qemu2 : exit status 80 (10.128636625s)

-- stdout --
	* [first-842000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-842000" primary control-plane node in "first-842000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-842000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-842000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-842000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-09-12 15:10:03.665737 -0700 PDT m=+2524.120055876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-843000 -n second-843000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-843000 -n second-843000: exit status 85 (79.750917ms)

-- stdout --
	* Profile "second-843000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-843000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-843000" host is not running, skipping log retrieval (state="* Profile \"second-843000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-843000\"")
helpers_test.go:175: Cleaning up "second-843000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-843000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-09-12 15:10:03.849352 -0700 PDT m=+2524.303674959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-842000 -n first-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-842000 -n first-842000: exit status 7 (29.649959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-842000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-842000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-842000
--- FAIL: TestMinikubeProfile (10.42s)
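[triage note] Every start failure in this report follows the same pattern: minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and the dial of the unix socket /var/run/socket_vmnet is refused, which indicates the socket_vmnet daemon was not running on this build agent. A minimal standalone preflight check in Go (illustrative only, not part of the minikube test suite) that reproduces the failing step:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same unix socket that socket_vmnet_client needs; a
		// refused connection here is exactly the error in the logs above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}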

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.11s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-349000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-349000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.039140666s)

                                                
                                                
-- stdout --
	* [mount-start-1-349000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-349000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-349000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-349000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-349000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-349000 -n mount-start-1-349000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-349000 -n mount-start-1-349000: exit status 7 (69.327208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-349000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.11s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (10.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-323000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-323000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.941943375s)

                                                
                                                
-- stdout --
	* [multinode-323000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-323000" primary control-plane node in "multinode-323000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-323000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 15:10:14.274955    4072 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:10:14.275085    4072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:10:14.275089    4072 out.go:358] Setting ErrFile to fd 2...
	I0912 15:10:14.275091    4072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:10:14.275215    4072 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:10:14.276324    4072 out.go:352] Setting JSON to false
	I0912 15:10:14.292221    4072 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4178,"bootTime":1726174836,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:10:14.292286    4072 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:10:14.299714    4072 out.go:177] * [multinode-323000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:10:14.308517    4072 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:10:14.308580    4072 notify.go:220] Checking for updates...
	I0912 15:10:14.316406    4072 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:10:14.319511    4072 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:10:14.322474    4072 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:10:14.325457    4072 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:10:14.328436    4072 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:10:14.331605    4072 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:10:14.336430    4072 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:10:14.343503    4072 start.go:297] selected driver: qemu2
	I0912 15:10:14.343517    4072 start.go:901] validating driver "qemu2" against <nil>
	I0912 15:10:14.343524    4072 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:10:14.345697    4072 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 15:10:14.349383    4072 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:10:14.352482    4072 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:10:14.352498    4072 cni.go:84] Creating CNI manager for ""
	I0912 15:10:14.352502    4072 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0912 15:10:14.352508    4072 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0912 15:10:14.352537    4072 start.go:340] cluster config:
	{Name:multinode-323000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-323000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:10:14.356104    4072 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:10:14.362441    4072 out.go:177] * Starting "multinode-323000" primary control-plane node in "multinode-323000" cluster
	I0912 15:10:14.366457    4072 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 15:10:14.366480    4072 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0912 15:10:14.366487    4072 cache.go:56] Caching tarball of preloaded images
	I0912 15:10:14.366536    4072 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:10:14.366542    4072 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0912 15:10:14.366737    4072 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/multinode-323000/config.json ...
	I0912 15:10:14.366748    4072 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/multinode-323000/config.json: {Name:mk1b9bbf567cb445a84cb15e944d17a5783c446a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:10:14.367125    4072 start.go:360] acquireMachinesLock for multinode-323000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:10:14.367167    4072 start.go:364] duration metric: took 34.167µs to acquireMachinesLock for "multinode-323000"
	I0912 15:10:14.367181    4072 start.go:93] Provisioning new machine with config: &{Name:multinode-323000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-323000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:10:14.367220    4072 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:10:14.375479    4072 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 15:10:14.393219    4072 start.go:159] libmachine.API.Create for "multinode-323000" (driver="qemu2")
	I0912 15:10:14.393242    4072 client.go:168] LocalClient.Create starting
	I0912 15:10:14.393303    4072 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:10:14.393334    4072 main.go:141] libmachine: Decoding PEM data...
	I0912 15:10:14.393354    4072 main.go:141] libmachine: Parsing certificate...
	I0912 15:10:14.393387    4072 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:10:14.393416    4072 main.go:141] libmachine: Decoding PEM data...
	I0912 15:10:14.393422    4072 main.go:141] libmachine: Parsing certificate...
	I0912 15:10:14.393752    4072 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:10:14.553973    4072 main.go:141] libmachine: Creating SSH key...
	I0912 15:10:14.744912    4072 main.go:141] libmachine: Creating Disk image...
	I0912 15:10:14.744919    4072 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:10:14.745184    4072 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/disk.qcow2
	I0912 15:10:14.754929    4072 main.go:141] libmachine: STDOUT: 
	I0912 15:10:14.754949    4072 main.go:141] libmachine: STDERR: 
	I0912 15:10:14.755009    4072 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/disk.qcow2 +20000M
	I0912 15:10:14.762949    4072 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:10:14.762974    4072 main.go:141] libmachine: STDERR: 
	I0912 15:10:14.762985    4072 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/disk.qcow2
	I0912 15:10:14.762989    4072 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:10:14.763000    4072 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:10:14.763030    4072 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:8f:d6:6f:b2:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/disk.qcow2
	I0912 15:10:14.764736    4072 main.go:141] libmachine: STDOUT: 
	I0912 15:10:14.764751    4072 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:10:14.764769    4072 client.go:171] duration metric: took 371.533791ms to LocalClient.Create
	I0912 15:10:16.766899    4072 start.go:128] duration metric: took 2.399724542s to createHost
	I0912 15:10:16.766945    4072 start.go:83] releasing machines lock for "multinode-323000", held for 2.399833708s
	W0912 15:10:16.767006    4072 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:10:16.779280    4072 out.go:177] * Deleting "multinode-323000" in qemu2 ...
	W0912 15:10:16.815288    4072 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:10:16.815307    4072 start.go:729] Will try again in 5 seconds ...
	I0912 15:10:21.817419    4072 start.go:360] acquireMachinesLock for multinode-323000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:10:21.817853    4072 start.go:364] duration metric: took 311.792µs to acquireMachinesLock for "multinode-323000"
	I0912 15:10:21.817982    4072 start.go:93] Provisioning new machine with config: &{Name:multinode-323000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-323000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:10:21.818226    4072 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:10:21.829702    4072 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 15:10:21.880215    4072 start.go:159] libmachine.API.Create for "multinode-323000" (driver="qemu2")
	I0912 15:10:21.880266    4072 client.go:168] LocalClient.Create starting
	I0912 15:10:21.880378    4072 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:10:21.880439    4072 main.go:141] libmachine: Decoding PEM data...
	I0912 15:10:21.880458    4072 main.go:141] libmachine: Parsing certificate...
	I0912 15:10:21.880538    4072 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:10:21.880586    4072 main.go:141] libmachine: Decoding PEM data...
	I0912 15:10:21.880602    4072 main.go:141] libmachine: Parsing certificate...
	I0912 15:10:21.881309    4072 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:10:22.052543    4072 main.go:141] libmachine: Creating SSH key...
	I0912 15:10:22.120048    4072 main.go:141] libmachine: Creating Disk image...
	I0912 15:10:22.120054    4072 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:10:22.120295    4072 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/disk.qcow2
	I0912 15:10:22.129466    4072 main.go:141] libmachine: STDOUT: 
	I0912 15:10:22.129485    4072 main.go:141] libmachine: STDERR: 
	I0912 15:10:22.129542    4072 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/disk.qcow2 +20000M
	I0912 15:10:22.137281    4072 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:10:22.137297    4072 main.go:141] libmachine: STDERR: 
	I0912 15:10:22.137308    4072 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/disk.qcow2
	I0912 15:10:22.137312    4072 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:10:22.137322    4072 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:10:22.137348    4072 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:71:6a:de:60:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/disk.qcow2
	I0912 15:10:22.138965    4072 main.go:141] libmachine: STDOUT: 
	I0912 15:10:22.138984    4072 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:10:22.139002    4072 client.go:171] duration metric: took 258.737959ms to LocalClient.Create
	I0912 15:10:24.141132    4072 start.go:128] duration metric: took 2.322931667s to createHost
	I0912 15:10:24.141180    4072 start.go:83] releasing machines lock for "multinode-323000", held for 2.323364958s
	W0912 15:10:24.141590    4072 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-323000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-323000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:10:24.150711    4072 out.go:201] 
	W0912 15:10:24.160856    4072 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:10:24.160905    4072 out.go:270] * 
	* 
	W0912 15:10:24.163489    4072 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:10:24.175725    4072 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-323000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-323000 -n multinode-323000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-323000 -n multinode-323000: exit status 7 (68.578375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-323000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.01s)
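[triage note] The --alsologtostderr trace above makes the start flow visible: createHost fails, the half-created profile is deleted, start.go waits five seconds ("Will try again in 5 seconds ..."), retries once, and only then exits with GUEST_PROVISION (exit status 80). A condensed sketch of that control flow; the function names here are illustrative, not minikube's actual API:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHostWithRetry mirrors the log: one cleanup plus one retry after
	// a fixed delay, then a hard provisioning failure.
	func createHostWithRetry(create func() error, cleanup func()) error {
		if err := create(); err == nil {
			return nil
		}
		cleanup()                   // "* Deleting ... in qemu2 ..."
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := create(); err != nil {
			return fmt.Errorf("error provisioning guest: %w", err) // -> GUEST_PROVISION
		}
		return nil
	}

	func main() {
		err := createHostWithRetry(
			func() error { return errors.New("connection refused") },
			func() { fmt.Println("deleting half-created VM") },
		)
		fmt.Println(err)
	}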

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (104.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-323000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-323000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (131.5735ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-323000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-323000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-323000 -- rollout status deployment/busybox: exit status 1 (58.447375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-323000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-323000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-323000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.22825ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-323000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-323000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-323000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.665625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-323000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-323000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-323000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.35475ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-323000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-323000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-323000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.937667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-323000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-323000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-323000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.770958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-323000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-323000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-323000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.837958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-323000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-323000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-323000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.504583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-323000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-323000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-323000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.244125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-323000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-323000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-323000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.833875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-323000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-323000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-323000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.819625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-323000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0912 15:11:52.704539    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/functional-384000/client.crt: no such file or directory" logger="UnhandledError"
E0912 15:12:06.921291    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-323000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-323000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.764583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-323000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-323000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-323000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.156584ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-323000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-323000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-323000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.900625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-323000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-323000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-323000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.281167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-323000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-323000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-323000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.029875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-323000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-323000 -n multinode-323000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-323000 -n multinode-323000: exit status 7 (29.37975ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-323000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (104.44s)
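[triage note] The 104 s wall time here is not a hang: the test polls for pod IPs on a fixed schedule, each attempt fails fast with the same "no server found" error (the cluster never came up in FreshStart2Nodes), and the loop runs until its deadline. A generic poll-until-deadline sketch of that pattern, with hypothetical names:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// pollUntil retries fn at the given interval until it succeeds or the
	// deadline passes, like the repeated "(may be temporary)" attempts above.
	func pollUntil(deadline, interval time.Duration, fn func() error) error {
		stop := time.Now().Add(deadline)
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Now().After(stop) {
				return fmt.Errorf("deadline exceeded: %w", err)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		err := pollUntil(time.Second, 200*time.Millisecond, func() error {
			return errors.New("no server found for cluster")
		})
		fmt.Println(err)
	}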

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-323000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-323000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.553083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-323000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-323000 -n multinode-323000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-323000 -n multinode-323000: exit status 7 (29.262291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-323000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-323000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-323000 -v 3 --alsologtostderr: exit status 83 (42.244541ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-323000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-323000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 15:12:08.815934    4155 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:12:08.816082    4155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:12:08.816085    4155 out.go:358] Setting ErrFile to fd 2...
	I0912 15:12:08.816088    4155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:12:08.816209    4155 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:12:08.816463    4155 mustload.go:65] Loading cluster: multinode-323000
	I0912 15:12:08.816636    4155 config.go:182] Loaded profile config "multinode-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:12:08.821321    4155 out.go:177] * The control-plane node multinode-323000 host is not running: state=Stopped
	I0912 15:12:08.825238    4155 out.go:177]   To start a cluster, run: "minikube start -p multinode-323000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-323000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-323000 -n multinode-323000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-323000 -n multinode-323000: exit status 7 (29.225875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-323000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-323000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-323000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (28.569208ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-323000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-323000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-323000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-323000 -n multinode-323000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-323000 -n multinode-323000: exit status 7 (29.483666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-323000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
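[triage note] The second error at multinode_test.go:230, "unexpected end of JSON input", is what encoding/json returns when handed zero bytes: kubectl failed before writing anything to stdout, so the test decoded an empty string. A small demonstration:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		var labels []map[string]string
		// kubectl produced no stdout, so the test effectively did this:
		err := json.Unmarshal([]byte(""), &labels)
		fmt.Println(err) // unexpected end of JSON input
	}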

                                                
                                    
TestMultiNode/serial/ProfileList (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-323000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-323000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-323000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-323000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-323000 -n multinode-323000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-323000 -n multinode-323000: exit status 7 (30.085458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-323000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
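The assertion above is a node-count check: the test expects the "multinode-323000" profile's Config.Nodes to list three nodes, but the stopped cluster's captured config retains only the single control-plane entry. A minimal sketch of that check in Go, decoding a trimmed copy of the captured "profile list --output json" output (the structs here are hypothetical reductions, not minikube's actual config types):

package main

import (
	"encoding/json"
	"fmt"
)

type node struct {
	Name         string `json:"Name"`
	ControlPlane bool   `json:"ControlPlane"`
	Worker       bool   `json:"Worker"`
}

type profile struct {
	Name   string `json:"Name"`
	Config struct {
		Nodes []node `json:"Nodes"`
	} `json:"Config"`
}

type profileList struct {
	Valid []profile `json:"valid"`
}

func main() {
	// Trimmed from the failure message above: one control-plane node
	// where the test expects three.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-323000",
		"Config":{"Nodes":[{"Name":"","ControlPlane":true,"Worker":true}]}}]}`)

	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %d node(s), want 3\n", p.Name, len(p.Config.Nodes))
	}
}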

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-323000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-323000 status --output json --alsologtostderr: exit status 7 (29.636375ms)

-- stdout --
	{"Name":"multinode-323000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0912 15:12:09.021521    4167 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:12:09.021661    4167 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:12:09.021664    4167 out.go:358] Setting ErrFile to fd 2...
	I0912 15:12:09.021667    4167 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:12:09.021785    4167 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:12:09.021900    4167 out.go:352] Setting JSON to true
	I0912 15:12:09.021915    4167 mustload.go:65] Loading cluster: multinode-323000
	I0912 15:12:09.021968    4167 notify.go:220] Checking for updates...
	I0912 15:12:09.022102    4167 config.go:182] Loaded profile config "multinode-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:12:09.022109    4167 status.go:255] checking status of multinode-323000 ...
	I0912 15:12:09.022325    4167 status.go:330] multinode-323000 host status = "Stopped" (err=<nil>)
	I0912 15:12:09.022329    4167 status.go:343] host is not running, skipping remaining checks
	I0912 15:12:09.022331    4167 status.go:257] multinode-323000 status: &{Name:multinode-323000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-323000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-323000 -n multinode-323000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-323000 -n multinode-323000: exit status 7 (29.394291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-323000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
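The decode failure above ("json: cannot unmarshal object into Go value of type []cmd.Status") falls out of the shape of the output: with only one node, "status --output json" prints a bare JSON object, while the test decodes into a slice, and encoding/json refuses to unmarshal an object into a slice. A minimal reproduction, with a hypothetical Status struct standing in for the test's cmd.Status:

package main

import (
	"encoding/json"
	"fmt"
)

// Status is a hypothetical stand-in for cmd.Status.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	// Verbatim single-node stdout captured above.
	raw := []byte(`{"Name":"multinode-323000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

	var many []Status
	err := json.Unmarshal(raw, &many)
	fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status

	// One tolerant alternative: fall back to a single object and wrap it.
	if err != nil {
		var one Status
		if err2 := json.Unmarshal(raw, &one); err2 == nil {
			many = []Status{one}
		}
	}
	fmt.Println(len(many), many[0].Host) // prints: 1 Stopped
}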

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-323000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-323000 node stop m03: exit status 85 (45.931666ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-323000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-323000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-323000 status: exit status 7 (29.5075ms)

-- stdout --
	multinode-323000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-323000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-323000 status --alsologtostderr: exit status 7 (29.230125ms)

-- stdout --
	multinode-323000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0912 15:12:09.156294    4175 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:12:09.156414    4175 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:12:09.156418    4175 out.go:358] Setting ErrFile to fd 2...
	I0912 15:12:09.156420    4175 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:12:09.156545    4175 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:12:09.156681    4175 out.go:352] Setting JSON to false
	I0912 15:12:09.156693    4175 mustload.go:65] Loading cluster: multinode-323000
	I0912 15:12:09.156742    4175 notify.go:220] Checking for updates...
	I0912 15:12:09.156957    4175 config.go:182] Loaded profile config "multinode-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:12:09.156964    4175 status.go:255] checking status of multinode-323000 ...
	I0912 15:12:09.157178    4175 status.go:330] multinode-323000 host status = "Stopped" (err=<nil>)
	I0912 15:12:09.157182    4175 status.go:343] host is not running, skipping remaining checks
	I0912 15:12:09.157184    4175 status.go:257] multinode-323000 status: &{Name:multinode-323000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-323000 status --alsologtostderr": multinode-323000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-323000 -n multinode-323000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-323000 -n multinode-323000: exit status 7 (30.006416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-323000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

TestMultiNode/serial/StartAfterStop (46.17s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-323000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-323000 node start m03 -v=7 --alsologtostderr: exit status 85 (44.839084ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0912 15:12:09.216193    4179 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:12:09.216468    4179 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:12:09.216471    4179 out.go:358] Setting ErrFile to fd 2...
	I0912 15:12:09.216473    4179 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:12:09.216613    4179 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:12:09.216856    4179 mustload.go:65] Loading cluster: multinode-323000
	I0912 15:12:09.217055    4179 config.go:182] Loaded profile config "multinode-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:12:09.221241    4179 out.go:201] 
	W0912 15:12:09.224266    4179 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0912 15:12:09.224271    4179 out.go:270] * 
	* 
	W0912 15:12:09.225885    4179 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:12:09.227387    4179 out.go:201] 

** /stderr **
multinode_test.go:284: I0912 15:12:09.216193    4179 out.go:345] Setting OutFile to fd 1 ...
I0912 15:12:09.216468    4179 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 15:12:09.216471    4179 out.go:358] Setting ErrFile to fd 2...
I0912 15:12:09.216473    4179 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 15:12:09.216613    4179 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
I0912 15:12:09.216856    4179 mustload.go:65] Loading cluster: multinode-323000
I0912 15:12:09.217055    4179 config.go:182] Loaded profile config "multinode-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 15:12:09.221241    4179 out.go:201] 
W0912 15:12:09.224266    4179 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0912 15:12:09.224271    4179 out.go:270] * 
* 
W0912 15:12:09.225885    4179 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0912 15:12:09.227387    4179 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-323000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-323000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-323000 status -v=7 --alsologtostderr: exit status 7 (29.544542ms)

-- stdout --
	multinode-323000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0912 15:12:09.261236    4181 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:12:09.261360    4181 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:12:09.261364    4181 out.go:358] Setting ErrFile to fd 2...
	I0912 15:12:09.261366    4181 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:12:09.261483    4181 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:12:09.261613    4181 out.go:352] Setting JSON to false
	I0912 15:12:09.261624    4181 mustload.go:65] Loading cluster: multinode-323000
	I0912 15:12:09.261670    4181 notify.go:220] Checking for updates...
	I0912 15:12:09.261822    4181 config.go:182] Loaded profile config "multinode-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:12:09.261829    4181 status.go:255] checking status of multinode-323000 ...
	I0912 15:12:09.262026    4181 status.go:330] multinode-323000 host status = "Stopped" (err=<nil>)
	I0912 15:12:09.262029    4181 status.go:343] host is not running, skipping remaining checks
	I0912 15:12:09.262031    4181 status.go:257] multinode-323000 status: &{Name:multinode-323000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-323000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-323000 status -v=7 --alsologtostderr: exit status 7 (72.405ms)

-- stdout --
	multinode-323000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0912 15:12:10.303954    4183 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:12:10.304130    4183 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:12:10.304134    4183 out.go:358] Setting ErrFile to fd 2...
	I0912 15:12:10.304137    4183 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:12:10.304293    4183 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:12:10.304436    4183 out.go:352] Setting JSON to false
	I0912 15:12:10.304452    4183 mustload.go:65] Loading cluster: multinode-323000
	I0912 15:12:10.304500    4183 notify.go:220] Checking for updates...
	I0912 15:12:10.304720    4183 config.go:182] Loaded profile config "multinode-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:12:10.304730    4183 status.go:255] checking status of multinode-323000 ...
	I0912 15:12:10.305032    4183 status.go:330] multinode-323000 host status = "Stopped" (err=<nil>)
	I0912 15:12:10.305036    4183 status.go:343] host is not running, skipping remaining checks
	I0912 15:12:10.305039    4183 status.go:257] multinode-323000 status: &{Name:multinode-323000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-323000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-323000 status -v=7 --alsologtostderr: exit status 7 (74.106958ms)

-- stdout --
	multinode-323000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0912 15:12:12.202899    4185 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:12:12.203094    4185 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:12:12.203098    4185 out.go:358] Setting ErrFile to fd 2...
	I0912 15:12:12.203101    4185 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:12:12.203304    4185 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:12:12.203484    4185 out.go:352] Setting JSON to false
	I0912 15:12:12.203500    4185 mustload.go:65] Loading cluster: multinode-323000
	I0912 15:12:12.203544    4185 notify.go:220] Checking for updates...
	I0912 15:12:12.203757    4185 config.go:182] Loaded profile config "multinode-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:12:12.203766    4185 status.go:255] checking status of multinode-323000 ...
	I0912 15:12:12.204075    4185 status.go:330] multinode-323000 host status = "Stopped" (err=<nil>)
	I0912 15:12:12.204080    4185 status.go:343] host is not running, skipping remaining checks
	I0912 15:12:12.204083    4185 status.go:257] multinode-323000 status: &{Name:multinode-323000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-323000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-323000 status -v=7 --alsologtostderr: exit status 7 (70.330291ms)

-- stdout --
	multinode-323000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0912 15:12:14.825931    4190 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:12:14.826107    4190 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:12:14.826115    4190 out.go:358] Setting ErrFile to fd 2...
	I0912 15:12:14.826118    4190 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:12:14.826305    4190 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:12:14.826464    4190 out.go:352] Setting JSON to false
	I0912 15:12:14.826479    4190 mustload.go:65] Loading cluster: multinode-323000
	I0912 15:12:14.826512    4190 notify.go:220] Checking for updates...
	I0912 15:12:14.826774    4190 config.go:182] Loaded profile config "multinode-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:12:14.826782    4190 status.go:255] checking status of multinode-323000 ...
	I0912 15:12:14.827079    4190 status.go:330] multinode-323000 host status = "Stopped" (err=<nil>)
	I0912 15:12:14.827084    4190 status.go:343] host is not running, skipping remaining checks
	I0912 15:12:14.827087    4190 status.go:257] multinode-323000 status: &{Name:multinode-323000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-323000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-323000 status -v=7 --alsologtostderr: exit status 7 (75.349833ms)

-- stdout --
	multinode-323000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0912 15:12:18.691345    4192 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:12:18.691554    4192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:12:18.691558    4192 out.go:358] Setting ErrFile to fd 2...
	I0912 15:12:18.691561    4192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:12:18.691720    4192 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:12:18.691878    4192 out.go:352] Setting JSON to false
	I0912 15:12:18.691893    4192 mustload.go:65] Loading cluster: multinode-323000
	I0912 15:12:18.691936    4192 notify.go:220] Checking for updates...
	I0912 15:12:18.692139    4192 config.go:182] Loaded profile config "multinode-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:12:18.692147    4192 status.go:255] checking status of multinode-323000 ...
	I0912 15:12:18.692422    4192 status.go:330] multinode-323000 host status = "Stopped" (err=<nil>)
	I0912 15:12:18.692427    4192 status.go:343] host is not running, skipping remaining checks
	I0912 15:12:18.692430    4192 status.go:257] multinode-323000 status: &{Name:multinode-323000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-323000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-323000 status -v=7 --alsologtostderr: exit status 7 (73.856917ms)

-- stdout --
	multinode-323000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0912 15:12:25.842537    4194 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:12:25.842718    4194 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:12:25.842722    4194 out.go:358] Setting ErrFile to fd 2...
	I0912 15:12:25.842726    4194 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:12:25.842904    4194 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:12:25.843050    4194 out.go:352] Setting JSON to false
	I0912 15:12:25.843065    4194 mustload.go:65] Loading cluster: multinode-323000
	I0912 15:12:25.843098    4194 notify.go:220] Checking for updates...
	I0912 15:12:25.843328    4194 config.go:182] Loaded profile config "multinode-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:12:25.843337    4194 status.go:255] checking status of multinode-323000 ...
	I0912 15:12:25.843644    4194 status.go:330] multinode-323000 host status = "Stopped" (err=<nil>)
	I0912 15:12:25.843648    4194 status.go:343] host is not running, skipping remaining checks
	I0912 15:12:25.843651    4194 status.go:257] multinode-323000 status: &{Name:multinode-323000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-323000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-323000 status -v=7 --alsologtostderr: exit status 7 (71.810833ms)

-- stdout --
	multinode-323000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0912 15:12:32.371275    4196 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:12:32.371460    4196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:12:32.371464    4196 out.go:358] Setting ErrFile to fd 2...
	I0912 15:12:32.371468    4196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:12:32.371627    4196 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:12:32.371785    4196 out.go:352] Setting JSON to false
	I0912 15:12:32.371803    4196 mustload.go:65] Loading cluster: multinode-323000
	I0912 15:12:32.371834    4196 notify.go:220] Checking for updates...
	I0912 15:12:32.372083    4196 config.go:182] Loaded profile config "multinode-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:12:32.372091    4196 status.go:255] checking status of multinode-323000 ...
	I0912 15:12:32.372353    4196 status.go:330] multinode-323000 host status = "Stopped" (err=<nil>)
	I0912 15:12:32.372358    4196 status.go:343] host is not running, skipping remaining checks
	I0912 15:12:32.372361    4196 status.go:257] multinode-323000 status: &{Name:multinode-323000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-323000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-323000 status -v=7 --alsologtostderr: exit status 7 (73.732541ms)

-- stdout --
	multinode-323000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0912 15:12:38.333416    4198 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:12:38.333634    4198 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:12:38.333638    4198 out.go:358] Setting ErrFile to fd 2...
	I0912 15:12:38.333641    4198 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:12:38.333862    4198 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:12:38.334026    4198 out.go:352] Setting JSON to false
	I0912 15:12:38.334045    4198 mustload.go:65] Loading cluster: multinode-323000
	I0912 15:12:38.334077    4198 notify.go:220] Checking for updates...
	I0912 15:12:38.334315    4198 config.go:182] Loaded profile config "multinode-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:12:38.334325    4198 status.go:255] checking status of multinode-323000 ...
	I0912 15:12:38.334606    4198 status.go:330] multinode-323000 host status = "Stopped" (err=<nil>)
	I0912 15:12:38.334611    4198 status.go:343] host is not running, skipping remaining checks
	I0912 15:12:38.334614    4198 status.go:257] multinode-323000 status: &{Name:multinode-323000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-323000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-323000 status -v=7 --alsologtostderr: exit status 7 (72.366958ms)

-- stdout --
	multinode-323000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0912 15:12:55.321241    4206 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:12:55.321677    4206 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:12:55.321683    4206 out.go:358] Setting ErrFile to fd 2...
	I0912 15:12:55.321687    4206 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:12:55.321945    4206 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:12:55.322164    4206 out.go:352] Setting JSON to false
	I0912 15:12:55.322178    4206 mustload.go:65] Loading cluster: multinode-323000
	I0912 15:12:55.322325    4206 notify.go:220] Checking for updates...
	I0912 15:12:55.322747    4206 config.go:182] Loaded profile config "multinode-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:12:55.322759    4206 status.go:255] checking status of multinode-323000 ...
	I0912 15:12:55.323050    4206 status.go:330] multinode-323000 host status = "Stopped" (err=<nil>)
	I0912 15:12:55.323056    4206 status.go:343] host is not running, skipping remaining checks
	I0912 15:12:55.323059    4206 status.go:257] multinode-323000 status: &{Name:multinode-323000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-323000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-323000 -n multinode-323000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-323000 -n multinode-323000: exit status 7 (31.980541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-323000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (46.17s)
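The nine "status" runs above are the test's retry loop: the timestamps climb from 15:12:09 to 15:12:55 with widening gaps before the test gives up, which is what stretches this subtest to 46 seconds against a host that never starts. A sketch of that poll-with-backoff pattern under assumed parameters (hostRunning is a hypothetical stand-in for running and parsing "minikube status"; the real test uses its own helpers):

package main

import (
	"fmt"
	"time"
)

// hostRunning is a hypothetical stand-in for invoking "minikube status"
// and checking the host state; here it always reports stopped, as in
// the log above.
func hostRunning() bool { return false }

func main() {
	delay := time.Second
	deadline := time.Now().Add(45 * time.Second)
	for time.Now().Before(deadline) {
		if hostRunning() {
			fmt.Println("host is running")
			return
		}
		time.Sleep(delay)
		delay *= 2 // widen the gap between polls, as the timestamps above suggest
	}
	fmt.Println("gave up: host still stopped")
}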

TestMultiNode/serial/RestartKeepsNodes (7.34s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-323000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-323000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-323000: (1.983478167s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-323000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-323000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.2261425s)

-- stdout --
	* [multinode-323000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-323000" primary control-plane node in "multinode-323000" cluster
	* Restarting existing qemu2 VM for "multinode-323000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-323000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:12:57.438148    4224 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:12:57.438297    4224 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:12:57.438301    4224 out.go:358] Setting ErrFile to fd 2...
	I0912 15:12:57.438304    4224 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:12:57.438483    4224 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:12:57.439737    4224 out.go:352] Setting JSON to false
	I0912 15:12:57.459827    4224 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4341,"bootTime":1726174836,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:12:57.459899    4224 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:12:57.464739    4224 out.go:177] * [multinode-323000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:12:57.471519    4224 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:12:57.471565    4224 notify.go:220] Checking for updates...
	I0912 15:12:57.478717    4224 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:12:57.480193    4224 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:12:57.483674    4224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:12:57.486657    4224 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:12:57.489699    4224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:12:57.493058    4224 config.go:182] Loaded profile config "multinode-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:12:57.493110    4224 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:12:57.497594    4224 out.go:177] * Using the qemu2 driver based on existing profile
	I0912 15:12:57.504680    4224 start.go:297] selected driver: qemu2
	I0912 15:12:57.504685    4224 start.go:901] validating driver "qemu2" against &{Name:multinode-323000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:multinode-323000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:12:57.504733    4224 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:12:57.507083    4224 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:12:57.507124    4224 cni.go:84] Creating CNI manager for ""
	I0912 15:12:57.507129    4224 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0912 15:12:57.507180    4224 start.go:340] cluster config:
	{Name:multinode-323000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-323000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:12:57.511174    4224 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:12:57.517596    4224 out.go:177] * Starting "multinode-323000" primary control-plane node in "multinode-323000" cluster
	I0912 15:12:57.521691    4224 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 15:12:57.521709    4224 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0912 15:12:57.521716    4224 cache.go:56] Caching tarball of preloaded images
	I0912 15:12:57.521772    4224 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:12:57.521778    4224 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0912 15:12:57.521827    4224 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/multinode-323000/config.json ...
	I0912 15:12:57.522327    4224 start.go:360] acquireMachinesLock for multinode-323000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:12:57.522362    4224 start.go:364] duration metric: took 28.875µs to acquireMachinesLock for "multinode-323000"
	I0912 15:12:57.522373    4224 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:12:57.522380    4224 fix.go:54] fixHost starting: 
	I0912 15:12:57.522501    4224 fix.go:112] recreateIfNeeded on multinode-323000: state=Stopped err=<nil>
	W0912 15:12:57.522509    4224 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 15:12:57.529646    4224 out.go:177] * Restarting existing qemu2 VM for "multinode-323000" ...
	I0912 15:12:57.533835    4224 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:12:57.533882    4224 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:71:6a:de:60:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/disk.qcow2
	I0912 15:12:57.536113    4224 main.go:141] libmachine: STDOUT: 
	I0912 15:12:57.536133    4224 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:12:57.536161    4224 fix.go:56] duration metric: took 13.782167ms for fixHost
	I0912 15:12:57.536167    4224 start.go:83] releasing machines lock for "multinode-323000", held for 13.800708ms
	W0912 15:12:57.536173    4224 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:12:57.536204    4224 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:12:57.536209    4224 start.go:729] Will try again in 5 seconds ...
	I0912 15:13:02.538308    4224 start.go:360] acquireMachinesLock for multinode-323000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:13:02.538765    4224 start.go:364] duration metric: took 342.583µs to acquireMachinesLock for "multinode-323000"
	I0912 15:13:02.538901    4224 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:13:02.538921    4224 fix.go:54] fixHost starting: 
	I0912 15:13:02.539680    4224 fix.go:112] recreateIfNeeded on multinode-323000: state=Stopped err=<nil>
	W0912 15:13:02.539707    4224 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 15:13:02.548106    4224 out.go:177] * Restarting existing qemu2 VM for "multinode-323000" ...
	I0912 15:13:02.552129    4224 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:13:02.552368    4224 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:71:6a:de:60:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/disk.qcow2
	I0912 15:13:02.562265    4224 main.go:141] libmachine: STDOUT: 
	I0912 15:13:02.562325    4224 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:13:02.562404    4224 fix.go:56] duration metric: took 23.48625ms for fixHost
	I0912 15:13:02.562421    4224 start.go:83] releasing machines lock for "multinode-323000", held for 23.63625ms
	W0912 15:13:02.562615    4224 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-323000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-323000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:13:02.570133    4224 out.go:201] 
	W0912 15:13:02.574096    4224 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:13:02.574144    4224 out.go:270] * 
	* 
	W0912 15:13:02.576916    4224 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:13:02.584101    4224 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-323000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-323000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-323000 -n multinode-323000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-323000 -n multinode-323000: exit status 7 (33.06475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-323000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (7.34s)
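Both restart attempts above die on the same error: Failed to connect to "/var/run/socket_vmnet": Connection refused. On this driver the VM's network device is wired through socket_vmnet_client (visible in the libmachine command line above), so if no socket_vmnet daemon is listening on that unix socket the guest can never boot, and every later subtest inherits a stopped host. A minimal pre-flight probe of the socket, using the path from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Nothing listening on this socket yields "connection refused",
	// matching the driver failure in the log above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening")
}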

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-323000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-323000 node delete m03: exit status 83 (39.346375ms)

-- stdout --
	* The control-plane node multinode-323000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-323000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-323000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-323000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-323000 status --alsologtostderr: exit status 7 (29.987625ms)

-- stdout --
	multinode-323000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0912 15:13:02.769124    4238 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:13:02.769248    4238 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:13:02.769252    4238 out.go:358] Setting ErrFile to fd 2...
	I0912 15:13:02.769254    4238 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:13:02.769381    4238 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:13:02.769502    4238 out.go:352] Setting JSON to false
	I0912 15:13:02.769514    4238 mustload.go:65] Loading cluster: multinode-323000
	I0912 15:13:02.769561    4238 notify.go:220] Checking for updates...
	I0912 15:13:02.769707    4238 config.go:182] Loaded profile config "multinode-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:13:02.769717    4238 status.go:255] checking status of multinode-323000 ...
	I0912 15:13:02.769911    4238 status.go:330] multinode-323000 host status = "Stopped" (err=<nil>)
	I0912 15:13:02.769914    4238 status.go:343] host is not running, skipping remaining checks
	I0912 15:13:02.769916    4238 status.go:257] multinode-323000 status: &{Name:multinode-323000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-323000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-323000 -n multinode-323000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-323000 -n multinode-323000: exit status 7 (29.013ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-323000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
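For reference, the --format={{.Host}} argument used by the post-mortem helpers is a Go text/template rendered against the status value logged at status.go:257 above. A reduced sketch of that rendering (the struct is trimmed to the fields this report prints; it is not minikube's actual type):

	package main

	import (
		"os"
		"text/template"
	)

	// Status mirrors only the fields visible in the log; the struct in the
	// log also carries Worker, TimeToStop, DockerEnv and PodManEnv.
	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		st := Status{Name: "multinode-323000", Host: "Stopped",
			Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		tmpl.Execute(os.Stdout, st) // prints "Stopped", matching the post-mortem output above
	}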

TestMultiNode/serial/StopMultiNode (1.96s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-323000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-323000 stop: (1.833784917s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-323000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-323000 status: exit status 7 (65.095125ms)

-- stdout --
	multinode-323000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-323000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-323000 status --alsologtostderr: exit status 7 (32.365125ms)

-- stdout --
	multinode-323000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0912 15:13:04.729482    4254 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:13:04.729747    4254 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:13:04.729753    4254 out.go:358] Setting ErrFile to fd 2...
	I0912 15:13:04.729756    4254 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:13:04.729917    4254 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:13:04.730063    4254 out.go:352] Setting JSON to false
	I0912 15:13:04.730076    4254 mustload.go:65] Loading cluster: multinode-323000
	I0912 15:13:04.730115    4254 notify.go:220] Checking for updates...
	I0912 15:13:04.730478    4254 config.go:182] Loaded profile config "multinode-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:13:04.730493    4254 status.go:255] checking status of multinode-323000 ...
	I0912 15:13:04.730705    4254 status.go:330] multinode-323000 host status = "Stopped" (err=<nil>)
	I0912 15:13:04.730709    4254 status.go:343] host is not running, skipping remaining checks
	I0912 15:13:04.730711    4254 status.go:257] multinode-323000 status: &{Name:multinode-323000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-323000 status --alsologtostderr": multinode-323000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-323000 status --alsologtostderr": multinode-323000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-323000 -n multinode-323000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-323000 -n multinode-323000: exit status 7 (30.180708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-323000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (1.96s)
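The assertions at multinode_test.go:364 and :368 fire because a stopped two-node cluster should report one "host: Stopped" (and "kubelet: Stopped") line per node, while only the primary control plane exists after the earlier restart failures. A hedged sketch of that kind of count check (assumed shape, not the real test logic):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Status output captured above; the second node never came up.
		out := "multinode-323000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
		if got, want := strings.Count(out, "host: Stopped"), 2; got != want {
			fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, want)
		}
	}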

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-323000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-323000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.180097416s)

-- stdout --
	* [multinode-323000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-323000" primary control-plane node in "multinode-323000" cluster
	* Restarting existing qemu2 VM for "multinode-323000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-323000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:13:04.790649    4258 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:13:04.790765    4258 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:13:04.790768    4258 out.go:358] Setting ErrFile to fd 2...
	I0912 15:13:04.790771    4258 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:13:04.790903    4258 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:13:04.791962    4258 out.go:352] Setting JSON to false
	I0912 15:13:04.807916    4258 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4348,"bootTime":1726174836,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:13:04.807991    4258 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:13:04.810837    4258 out.go:177] * [multinode-323000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:13:04.818051    4258 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:13:04.818078    4258 notify.go:220] Checking for updates...
	I0912 15:13:04.824003    4258 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:13:04.827036    4258 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:13:04.828366    4258 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:13:04.830960    4258 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:13:04.833993    4258 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:13:04.837332    4258 config.go:182] Loaded profile config "multinode-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:13:04.837595    4258 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:13:04.841927    4258 out.go:177] * Using the qemu2 driver based on existing profile
	I0912 15:13:04.848993    4258 start.go:297] selected driver: qemu2
	I0912 15:13:04.849000    4258 start.go:901] validating driver "qemu2" against &{Name:multinode-323000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-323000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:13:04.849066    4258 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:13:04.851258    4258 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:13:04.851303    4258 cni.go:84] Creating CNI manager for ""
	I0912 15:13:04.851310    4258 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0912 15:13:04.851349    4258 start.go:340] cluster config:
	{Name:multinode-323000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-323000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:13:04.854742    4258 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:13:04.862012    4258 out.go:177] * Starting "multinode-323000" primary control-plane node in "multinode-323000" cluster
	I0912 15:13:04.865916    4258 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 15:13:04.865934    4258 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0912 15:13:04.865948    4258 cache.go:56] Caching tarball of preloaded images
	I0912 15:13:04.866001    4258 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:13:04.866010    4258 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0912 15:13:04.866080    4258 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/multinode-323000/config.json ...
	I0912 15:13:04.866593    4258 start.go:360] acquireMachinesLock for multinode-323000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:13:04.866620    4258 start.go:364] duration metric: took 21.75µs to acquireMachinesLock for "multinode-323000"
	I0912 15:13:04.866630    4258 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:13:04.866635    4258 fix.go:54] fixHost starting: 
	I0912 15:13:04.866754    4258 fix.go:112] recreateIfNeeded on multinode-323000: state=Stopped err=<nil>
	W0912 15:13:04.866765    4258 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 15:13:04.873953    4258 out.go:177] * Restarting existing qemu2 VM for "multinode-323000" ...
	I0912 15:13:04.877986    4258 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:13:04.878030    4258 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:71:6a:de:60:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/disk.qcow2
	I0912 15:13:04.880020    4258 main.go:141] libmachine: STDOUT: 
	I0912 15:13:04.880041    4258 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:13:04.880068    4258 fix.go:56] duration metric: took 13.433083ms for fixHost
	I0912 15:13:04.880072    4258 start.go:83] releasing machines lock for "multinode-323000", held for 13.447875ms
	W0912 15:13:04.880078    4258 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:13:04.880107    4258 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:13:04.880112    4258 start.go:729] Will try again in 5 seconds ...
	I0912 15:13:09.882148    4258 start.go:360] acquireMachinesLock for multinode-323000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:13:09.882670    4258 start.go:364] duration metric: took 368.958µs to acquireMachinesLock for "multinode-323000"
	I0912 15:13:09.882773    4258 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:13:09.882789    4258 fix.go:54] fixHost starting: 
	I0912 15:13:09.883412    4258 fix.go:112] recreateIfNeeded on multinode-323000: state=Stopped err=<nil>
	W0912 15:13:09.883438    4258 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 15:13:09.888886    4258 out.go:177] * Restarting existing qemu2 VM for "multinode-323000" ...
	I0912 15:13:09.898904    4258 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:13:09.899222    4258 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:71:6a:de:60:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/multinode-323000/disk.qcow2
	I0912 15:13:09.908597    4258 main.go:141] libmachine: STDOUT: 
	I0912 15:13:09.908706    4258 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:13:09.908780    4258 fix.go:56] duration metric: took 25.990416ms for fixHost
	I0912 15:13:09.908798    4258 start.go:83] releasing machines lock for "multinode-323000", held for 26.108542ms
	W0912 15:13:09.908970    4258 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-323000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:13:09.916778    4258 out.go:201] 
	W0912 15:13:09.918604    4258 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:13:09.918634    4258 out.go:270] * 
	W0912 15:13:09.921599    4258 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:13:09.929737    4258 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-323000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-323000 -n multinode-323000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-323000 -n multinode-323000: exit status 7 (65.845042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-323000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
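The stderr above shows the qemu2 driver's retry shape: fixHost fails, minikube logs "Will try again in 5 seconds ..." (start.go:729), retries once, and then exits with GUEST_PROVISION. A self-contained sketch of that flow (function names and message shapes are illustrative, not minikube's actual API):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the driver start that fails throughout this report.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}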

TestMultiNode/serial/ValidateNameConflict (20.26s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-323000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-323000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-323000-m01 --driver=qemu2 : exit status 80 (10.057470333s)

-- stdout --
	* [multinode-323000-m01] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-323000-m01" primary control-plane node in "multinode-323000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-323000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-323000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-323000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-323000-m02 --driver=qemu2 : exit status 80 (9.975507792s)

-- stdout --
	* [multinode-323000-m02] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-323000-m02" primary control-plane node in "multinode-323000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-323000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-323000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-323000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-323000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-323000: exit status 83 (81.213042ms)

-- stdout --
	* The control-plane node multinode-323000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-323000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-323000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-323000 -n multinode-323000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-323000 -n multinode-323000: exit status 7 (30.000708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-323000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.26s)
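ValidateNameConflict deliberately starts profiles named multinode-323000-m01 and -m02 because they collide with the <cluster>-mNN names minikube gives secondary nodes. A hedged sketch of the kind of collision check being exercised (illustrative only, not minikube's implementation):

	package main

	import (
		"fmt"
		"strings"
	)

	// conflictsWithNodeName reports whether a requested profile name clashes
	// with the node-naming scheme of an existing cluster profile.
	func conflictsWithNodeName(profile string, clusters []string) bool {
		for _, c := range clusters {
			if strings.HasPrefix(profile, c+"-m") {
				return true
			}
		}
		return false
	}

	func main() {
		existing := []string{"multinode-323000"}
		fmt.Println(conflictsWithNodeName("multinode-323000-m01", existing)) // true
	}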

TestPreload (10.03s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-336000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-336000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.879542958s)

-- stdout --
	* [test-preload-336000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-336000" primary control-plane node in "test-preload-336000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-336000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:13:30.412714    4313 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:13:30.412854    4313 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:13:30.412857    4313 out.go:358] Setting ErrFile to fd 2...
	I0912 15:13:30.412859    4313 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:13:30.412995    4313 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:13:30.414046    4313 out.go:352] Setting JSON to false
	I0912 15:13:30.430958    4313 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4374,"bootTime":1726174836,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:13:30.431031    4313 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:13:30.438417    4313 out.go:177] * [test-preload-336000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:13:30.446412    4313 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:13:30.446464    4313 notify.go:220] Checking for updates...
	I0912 15:13:30.453297    4313 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:13:30.456231    4313 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:13:30.459319    4313 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:13:30.460862    4313 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:13:30.464344    4313 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:13:30.467695    4313 config.go:182] Loaded profile config "multinode-323000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:13:30.467752    4313 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:13:30.472104    4313 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:13:30.479339    4313 start.go:297] selected driver: qemu2
	I0912 15:13:30.479346    4313 start.go:901] validating driver "qemu2" against <nil>
	I0912 15:13:30.479353    4313 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:13:30.481711    4313 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 15:13:30.484303    4313 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:13:30.487375    4313 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:13:30.487414    4313 cni.go:84] Creating CNI manager for ""
	I0912 15:13:30.487422    4313 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:13:30.487428    4313 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 15:13:30.487470    4313 start.go:340] cluster config:
	{Name:test-preload-336000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:13:30.491490    4313 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:13:30.498270    4313 out.go:177] * Starting "test-preload-336000" primary control-plane node in "test-preload-336000" cluster
	I0912 15:13:30.502278    4313 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0912 15:13:30.502371    4313 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/test-preload-336000/config.json ...
	I0912 15:13:30.502390    4313 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/test-preload-336000/config.json: {Name:mkc75a726c803e38081f3054a58772209accd141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:13:30.502382    4313 cache.go:107] acquiring lock: {Name:mk849aad3d81f3b11675bc719648a3ec0114ab1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:13:30.502385    4313 cache.go:107] acquiring lock: {Name:mk29d6d5ccd3f3e82f6e1c388e7cb20e3b1d7db5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:13:30.502409    4313 cache.go:107] acquiring lock: {Name:mk25939cfb24945736d3d7268fbd88dd49b36bb3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:13:30.502382    4313 cache.go:107] acquiring lock: {Name:mkb2a64d3e3719cf8754386c1b8c2a886238e6a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:13:30.502505    4313 cache.go:107] acquiring lock: {Name:mk9bc6e151fd132e4bbc25d20b5bbe2ea786552f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:13:30.502510    4313 cache.go:107] acquiring lock: {Name:mkd8279431e00b7f1bded8d65f87c6d3817bde30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:13:30.502537    4313 cache.go:107] acquiring lock: {Name:mk0934ee2a41b4dc0daaf2c2f00bca0ef7f697aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:13:30.503002    4313 start.go:360] acquireMachinesLock for test-preload-336000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:13:30.503040    4313 start.go:364] duration metric: took 29µs to acquireMachinesLock for "test-preload-336000"
	I0912 15:13:30.503055    4313 start.go:93] Provisioning new machine with config: &{Name:test-preload-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:13:30.503099    4313 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:13:30.503570    4313 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0912 15:13:30.503576    4313 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0912 15:13:30.503581    4313 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0912 15:13:30.503570    4313 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0912 15:13:30.503599    4313 cache.go:107] acquiring lock: {Name:mkfcbf8a273c02a617276af3b1784ca9c7d27c2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:13:30.503622    4313 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0912 15:13:30.503647    4313 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0912 15:13:30.503736    4313 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 15:13:30.506281    4313 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 15:13:30.506808    4313 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0912 15:13:30.513092    4313 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 15:13:30.513122    4313 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0912 15:13:30.513189    4313 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0912 15:13:30.513218    4313 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0912 15:13:30.513226    4313 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0912 15:13:30.513426    4313 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0912 15:13:30.513424    4313 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0912 15:13:30.513653    4313 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0912 15:13:30.524785    4313 start.go:159] libmachine.API.Create for "test-preload-336000" (driver="qemu2")
	I0912 15:13:30.524804    4313 client.go:168] LocalClient.Create starting
	I0912 15:13:30.524891    4313 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:13:30.524924    4313 main.go:141] libmachine: Decoding PEM data...
	I0912 15:13:30.524932    4313 main.go:141] libmachine: Parsing certificate...
	I0912 15:13:30.524969    4313 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:13:30.524996    4313 main.go:141] libmachine: Decoding PEM data...
	I0912 15:13:30.525006    4313 main.go:141] libmachine: Parsing certificate...
	I0912 15:13:30.525355    4313 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:13:30.688462    4313 main.go:141] libmachine: Creating SSH key...
	I0912 15:13:30.860047    4313 main.go:141] libmachine: Creating Disk image...
	I0912 15:13:30.860078    4313 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:13:30.860339    4313 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/test-preload-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/test-preload-336000/disk.qcow2
	I0912 15:13:30.870702    4313 main.go:141] libmachine: STDOUT: 
	I0912 15:13:30.870729    4313 main.go:141] libmachine: STDERR: 
	I0912 15:13:30.870786    4313 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/test-preload-336000/disk.qcow2 +20000M
	I0912 15:13:30.879649    4313 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:13:30.879671    4313 main.go:141] libmachine: STDERR: 
	I0912 15:13:30.879695    4313 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/test-preload-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/test-preload-336000/disk.qcow2
	I0912 15:13:30.879700    4313 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:13:30.879711    4313 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:13:30.879737    4313 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/test-preload-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/test-preload-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/test-preload-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:5c:ad:71:dc:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/test-preload-336000/disk.qcow2
	I0912 15:13:30.882100    4313 main.go:141] libmachine: STDOUT: 
	I0912 15:13:30.882121    4313 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:13:30.882141    4313 client.go:171] duration metric: took 357.343083ms to LocalClient.Create
	I0912 15:13:31.072224    4313 cache.go:162] opening:  /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0912 15:13:31.079411    4313 cache.go:162] opening:  /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0912 15:13:31.079569    4313 cache.go:162] opening:  /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0912 15:13:31.083035    4313 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0912 15:13:31.083077    4313 cache.go:162] opening:  /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0912 15:13:31.104115    4313 cache.go:162] opening:  /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0912 15:13:31.109972    4313 cache.go:162] opening:  /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0912 15:13:31.116079    4313 cache.go:162] opening:  /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0912 15:13:31.270899    4313 cache.go:157] /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0912 15:13:31.270941    4313 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 768.475541ms
	I0912 15:13:31.270984    4313 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0912 15:13:31.409502    4313 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0912 15:13:31.409578    4313 cache.go:162] opening:  /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0912 15:13:31.870177    4313 cache.go:157] /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0912 15:13:31.870225    4313 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.367880208s
	I0912 15:13:31.870273    4313 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0912 15:13:32.383942    4313 cache.go:157] /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0912 15:13:32.383998    4313 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 1.880446209s
	I0912 15:13:32.384022    4313 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0912 15:13:32.882356    4313 start.go:128] duration metric: took 2.379296917s to createHost
	I0912 15:13:32.882415    4313 start.go:83] releasing machines lock for "test-preload-336000", held for 2.379430417s
	W0912 15:13:32.882471    4313 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:13:32.900903    4313 out.go:177] * Deleting "test-preload-336000" in qemu2 ...
	W0912 15:13:32.941593    4313 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:13:32.941623    4313 start.go:729] Will try again in 5 seconds ...
	I0912 15:13:33.929247    4313 cache.go:157] /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0912 15:13:33.929293    4313 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.427004958s
	I0912 15:13:33.929320    4313 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0912 15:13:35.130333    4313 cache.go:157] /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0912 15:13:35.130411    4313 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.628025417s
	I0912 15:13:35.130440    4313 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0912 15:13:35.733379    4313 cache.go:157] /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0912 15:13:35.733454    4313 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.231214167s
	I0912 15:13:35.733481    4313 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0912 15:13:37.606694    4313 cache.go:157] /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0912 15:13:37.606747    4313 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 7.104530792s
	I0912 15:13:37.606771    4313 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0912 15:13:37.941732    4313 start.go:360] acquireMachinesLock for test-preload-336000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:13:37.942170    4313 start.go:364] duration metric: took 362.208µs to acquireMachinesLock for "test-preload-336000"
	I0912 15:13:37.942308    4313 start.go:93] Provisioning new machine with config: &{Name:test-preload-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:13:37.942531    4313 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:13:37.955039    4313 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 15:13:38.007400    4313 start.go:159] libmachine.API.Create for "test-preload-336000" (driver="qemu2")
	I0912 15:13:38.007459    4313 client.go:168] LocalClient.Create starting
	I0912 15:13:38.007606    4313 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:13:38.007678    4313 main.go:141] libmachine: Decoding PEM data...
	I0912 15:13:38.007709    4313 main.go:141] libmachine: Parsing certificate...
	I0912 15:13:38.007786    4313 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:13:38.007831    4313 main.go:141] libmachine: Decoding PEM data...
	I0912 15:13:38.007846    4313 main.go:141] libmachine: Parsing certificate...
	I0912 15:13:38.008363    4313 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:13:38.178587    4313 main.go:141] libmachine: Creating SSH key...
	I0912 15:13:38.204763    4313 main.go:141] libmachine: Creating Disk image...
	I0912 15:13:38.204768    4313 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:13:38.205012    4313 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/test-preload-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/test-preload-336000/disk.qcow2
	I0912 15:13:38.214314    4313 main.go:141] libmachine: STDOUT: 
	I0912 15:13:38.214328    4313 main.go:141] libmachine: STDERR: 
	I0912 15:13:38.214370    4313 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/test-preload-336000/disk.qcow2 +20000M
	I0912 15:13:38.222365    4313 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:13:38.222380    4313 main.go:141] libmachine: STDERR: 
	I0912 15:13:38.222392    4313 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/test-preload-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/test-preload-336000/disk.qcow2
	I0912 15:13:38.222396    4313 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:13:38.222408    4313 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:13:38.222445    4313 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/test-preload-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/test-preload-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/test-preload-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:73:d7:2b:71:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/test-preload-336000/disk.qcow2
	I0912 15:13:38.224112    4313 main.go:141] libmachine: STDOUT: 
	I0912 15:13:38.224127    4313 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:13:38.224141    4313 client.go:171] duration metric: took 216.683667ms to LocalClient.Create
	I0912 15:13:39.605417    4313 cache.go:157] /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0912 15:13:39.605512    4313 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.103261209s
	I0912 15:13:39.605544    4313 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0912 15:13:39.605597    4313 cache.go:87] Successfully saved all images to host disk.
	I0912 15:13:40.226316    4313 start.go:128] duration metric: took 2.283811416s to createHost
	I0912 15:13:40.226392    4313 start.go:83] releasing machines lock for "test-preload-336000", held for 2.284260291s
	W0912 15:13:40.226762    4313 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:13:40.235085    4313 out.go:201] 
	W0912 15:13:40.240191    4313 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:13:40.240232    4313 out.go:270] * 
	* 
	W0912 15:13:40.242665    4313 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:13:40.249120    4313 out.go:201] 
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-336000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-09-12 15:13:40.266992 -0700 PDT m=+2740.727384834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-336000 -n test-preload-336000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-336000 -n test-preload-336000: exit status 7 (67.224458ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-336000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-336000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-336000
--- FAIL: TestPreload (10.03s)
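Every qemu2-driver failure above and below shares one root symptom: socket_vmnet_client cannot reach /var/run/socket_vmnet. A minimal first-pass check on the host, sketched from the paths shown in the log above (these commands are an assumed diagnosis, not part of the test run):

	# Does the unix socket exist, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Exercise the client the same way minikube does; 'true' is a
	# hypothetical stand-in for the qemu binary, for illustration only.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the daemon is not listening, the client fails with exactly the "Connection refused" seen throughout this report.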
TestScheduledStopUnix (10.04s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-313000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-313000 --memory=2048 --driver=qemu2 : exit status 80 (9.887533875s)
-- stdout --
	* [scheduled-stop-313000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-313000" primary control-plane node in "scheduled-stop-313000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-313000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-313000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80
-- stdout --
	* [scheduled-stop-313000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-313000" primary control-plane node in "scheduled-stop-313000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-313000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-313000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-09-12 15:13:50.302341 -0700 PDT m=+2750.763015417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-313000 -n scheduled-stop-313000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-313000 -n scheduled-stop-313000: exit status 7 (67.716959ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-313000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-313000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-313000
--- FAIL: TestScheduledStopUnix (10.04s)
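The same connection-refused signature repeats in the failures that follow. Assuming socket_vmnet was installed via Homebrew, as the minikube qemu2 driver docs suggest (an assumption; this CI host may manage the daemon differently), a sketch of how to bring it back up:

	# Restart the launchd-managed service; brew is invoked via sudo
	# because the daemon needs root to open vmnet.
	HOMEBREW=$(which brew) && sudo ${HOMEBREW} services restart socket_vmnet
	# Or run it in the foreground per the socket_vmnet README;
	# the gateway address here is an example value.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet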
TestSkaffold (12.49s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3601049052 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3601049052 version: (1.067782166s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-170000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-170000 --memory=2600 --driver=qemu2 : exit status 80 (9.8738205s)
-- stdout --
	* [skaffold-170000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-170000" primary control-plane node in "skaffold-170000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-170000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-170000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80
-- stdout --
	* [skaffold-170000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-170000" primary control-plane node in "skaffold-170000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-170000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-170000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-09-12 15:14:02.793957 -0700 PDT m=+2763.254981084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-170000 -n skaffold-170000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-170000 -n skaffold-170000: exit status 7 (62.215083ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-170000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-170000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-170000
--- FAIL: TestSkaffold (12.49s)
TestRunningBinaryUpgrade (608.78s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2015370779 start -p running-upgrade-871000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2015370779 start -p running-upgrade-871000 --memory=2200 --vm-driver=qemu2 : (1m0.035160541s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-871000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0912 15:16:52.696016    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/functional-384000/client.crt: no such file or directory" logger="UnhandledError"
E0912 15:17:06.912851    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-871000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m34.269092125s)
-- stdout --
	* [running-upgrade-871000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-871000" primary control-plane node in "running-upgrade-871000" cluster
	* Updating the running qemu2 "running-upgrade-871000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	
-- /stdout --
** stderr ** 
	I0912 15:15:45.183286    4705 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:15:45.183437    4705 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:15:45.183445    4705 out.go:358] Setting ErrFile to fd 2...
	I0912 15:15:45.183447    4705 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:15:45.183574    4705 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:15:45.184687    4705 out.go:352] Setting JSON to false
	I0912 15:15:45.201189    4705 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4509,"bootTime":1726174836,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:15:45.201275    4705 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:15:45.206499    4705 out.go:177] * [running-upgrade-871000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:15:45.213474    4705 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:15:45.213492    4705 notify.go:220] Checking for updates...
	I0912 15:15:45.222401    4705 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:15:45.226380    4705 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:15:45.229391    4705 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:15:45.232348    4705 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:15:45.235367    4705 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:15:45.237009    4705 config.go:182] Loaded profile config "running-upgrade-871000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0912 15:15:45.240362    4705 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0912 15:15:45.243519    4705 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:15:45.248259    4705 out.go:177] * Using the qemu2 driver based on existing profile
	I0912 15:15:45.255447    4705 start.go:297] selected driver: qemu2
	I0912 15:15:45.255454    4705 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50288 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0912 15:15:45.255517    4705 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:15:45.257744    4705 cni.go:84] Creating CNI manager for ""
	I0912 15:15:45.257763    4705 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:15:45.257791    4705 start.go:340] cluster config:
	{Name:running-upgrade-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50288 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0912 15:15:45.257844    4705 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:15:45.265354    4705 out.go:177] * Starting "running-upgrade-871000" primary control-plane node in "running-upgrade-871000" cluster
	I0912 15:15:45.269430    4705 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0912 15:15:45.269447    4705 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0912 15:15:45.269461    4705 cache.go:56] Caching tarball of preloaded images
	I0912 15:15:45.269530    4705 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:15:45.269536    4705 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0912 15:15:45.269593    4705 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/running-upgrade-871000/config.json ...
	I0912 15:15:45.270075    4705 start.go:360] acquireMachinesLock for running-upgrade-871000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:15:45.270111    4705 start.go:364] duration metric: took 30.791µs to acquireMachinesLock for "running-upgrade-871000"
	I0912 15:15:45.270120    4705 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:15:45.270125    4705 fix.go:54] fixHost starting: 
	I0912 15:15:45.270740    4705 fix.go:112] recreateIfNeeded on running-upgrade-871000: state=Running err=<nil>
	W0912 15:15:45.270749    4705 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 15:15:45.274404    4705 out.go:177] * Updating the running qemu2 "running-upgrade-871000" VM ...
	I0912 15:15:45.282386    4705 machine.go:93] provisionDockerMachine start ...
	I0912 15:15:45.282444    4705 main.go:141] libmachine: Using SSH client type: native
	I0912 15:15:45.282568    4705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ca7ba0] 0x102caa400 <nil>  [] 0s} localhost 50256 <nil> <nil>}
	I0912 15:15:45.282573    4705 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 15:15:45.349687    4705 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-871000
	
	I0912 15:15:45.349698    4705 buildroot.go:166] provisioning hostname "running-upgrade-871000"
	I0912 15:15:45.349733    4705 main.go:141] libmachine: Using SSH client type: native
	I0912 15:15:45.349828    4705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ca7ba0] 0x102caa400 <nil>  [] 0s} localhost 50256 <nil> <nil>}
	I0912 15:15:45.349833    4705 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-871000 && echo "running-upgrade-871000" | sudo tee /etc/hostname
	I0912 15:15:45.422306    4705 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-871000
	
	I0912 15:15:45.422354    4705 main.go:141] libmachine: Using SSH client type: native
	I0912 15:15:45.422467    4705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ca7ba0] 0x102caa400 <nil>  [] 0s} localhost 50256 <nil> <nil>}
	I0912 15:15:45.422475    4705 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-871000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-871000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-871000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 15:15:45.491467    4705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 15:15:45.491479    4705 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19616-1259/.minikube CaCertPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19616-1259/.minikube}
	I0912 15:15:45.491486    4705 buildroot.go:174] setting up certificates
	I0912 15:15:45.491490    4705 provision.go:84] configureAuth start
	I0912 15:15:45.491497    4705 provision.go:143] copyHostCerts
	I0912 15:15:45.491556    4705 exec_runner.go:144] found /Users/jenkins/minikube-integration/19616-1259/.minikube/ca.pem, removing ...
	I0912 15:15:45.491562    4705 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19616-1259/.minikube/ca.pem
	I0912 15:15:45.491694    4705 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19616-1259/.minikube/ca.pem (1078 bytes)
	I0912 15:15:45.491875    4705 exec_runner.go:144] found /Users/jenkins/minikube-integration/19616-1259/.minikube/cert.pem, removing ...
	I0912 15:15:45.491878    4705 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19616-1259/.minikube/cert.pem
	I0912 15:15:45.491932    4705 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19616-1259/.minikube/cert.pem (1123 bytes)
	I0912 15:15:45.492022    4705 exec_runner.go:144] found /Users/jenkins/minikube-integration/19616-1259/.minikube/key.pem, removing ...
	I0912 15:15:45.492025    4705 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19616-1259/.minikube/key.pem
	I0912 15:15:45.492071    4705 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19616-1259/.minikube/key.pem (1675 bytes)
	I0912 15:15:45.492152    4705 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-871000 san=[127.0.0.1 localhost minikube running-upgrade-871000]
	I0912 15:15:45.636177    4705 provision.go:177] copyRemoteCerts
	I0912 15:15:45.636224    4705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 15:15:45.636234    4705 sshutil.go:53] new ssh client: &{IP:localhost Port:50256 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/running-upgrade-871000/id_rsa Username:docker}
	I0912 15:15:45.673490    4705 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0912 15:15:45.680163    4705 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0912 15:15:45.687625    4705 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 15:15:45.695270    4705 provision.go:87] duration metric: took 203.776833ms to configureAuth
	I0912 15:15:45.695280    4705 buildroot.go:189] setting minikube options for container-runtime
	I0912 15:15:45.695379    4705 config.go:182] Loaded profile config "running-upgrade-871000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0912 15:15:45.695418    4705 main.go:141] libmachine: Using SSH client type: native
	I0912 15:15:45.695519    4705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ca7ba0] 0x102caa400 <nil>  [] 0s} localhost 50256 <nil> <nil>}
	I0912 15:15:45.695524    4705 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0912 15:15:45.765452    4705 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0912 15:15:45.765462    4705 buildroot.go:70] root file system type: tmpfs
	I0912 15:15:45.765515    4705 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0912 15:15:45.765565    4705 main.go:141] libmachine: Using SSH client type: native
	I0912 15:15:45.765685    4705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ca7ba0] 0x102caa400 <nil>  [] 0s} localhost 50256 <nil> <nil>}
	I0912 15:15:45.765717    4705 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0912 15:15:45.838076    4705 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0912 15:15:45.838124    4705 main.go:141] libmachine: Using SSH client type: native
	I0912 15:15:45.838227    4705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ca7ba0] 0x102caa400 <nil>  [] 0s} localhost 50256 <nil> <nil>}
	I0912 15:15:45.838235    4705 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0912 15:15:45.906374    4705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 15:15:45.906384    4705 machine.go:96] duration metric: took 624.008875ms to provisionDockerMachine
	I0912 15:15:45.906389    4705 start.go:293] postStartSetup for "running-upgrade-871000" (driver="qemu2")
	I0912 15:15:45.906396    4705 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 15:15:45.906442    4705 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 15:15:45.906453    4705 sshutil.go:53] new ssh client: &{IP:localhost Port:50256 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/running-upgrade-871000/id_rsa Username:docker}
	I0912 15:15:45.943070    4705 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 15:15:45.944313    4705 info.go:137] Remote host: Buildroot 2021.02.12
	I0912 15:15:45.944320    4705 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19616-1259/.minikube/addons for local assets ...
	I0912 15:15:45.944393    4705 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19616-1259/.minikube/files for local assets ...
	I0912 15:15:45.944504    4705 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19616-1259/.minikube/files/etc/ssl/certs/17842.pem -> 17842.pem in /etc/ssl/certs
	I0912 15:15:45.944622    4705 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 15:15:45.947662    4705 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/files/etc/ssl/certs/17842.pem --> /etc/ssl/certs/17842.pem (1708 bytes)
	I0912 15:15:45.954905    4705 start.go:296] duration metric: took 48.5105ms for postStartSetup
	I0912 15:15:45.954919    4705 fix.go:56] duration metric: took 684.814833ms for fixHost
	I0912 15:15:45.954961    4705 main.go:141] libmachine: Using SSH client type: native
	I0912 15:15:45.955075    4705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ca7ba0] 0x102caa400 <nil>  [] 0s} localhost 50256 <nil> <nil>}
	I0912 15:15:45.955083    4705 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 15:15:46.024250    4705 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726179345.837096096
	
	I0912 15:15:46.024257    4705 fix.go:216] guest clock: 1726179345.837096096
	I0912 15:15:46.024261    4705 fix.go:229] Guest: 2024-09-12 15:15:45.837096096 -0700 PDT Remote: 2024-09-12 15:15:45.954921 -0700 PDT m=+0.791568084 (delta=-117.824904ms)
	I0912 15:15:46.024272    4705 fix.go:200] guest clock delta is within tolerance: -117.824904ms
	I0912 15:15:46.024275    4705 start.go:83] releasing machines lock for "running-upgrade-871000", held for 754.180625ms
	I0912 15:15:46.024332    4705 ssh_runner.go:195] Run: cat /version.json
	I0912 15:15:46.024342    4705 sshutil.go:53] new ssh client: &{IP:localhost Port:50256 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/running-upgrade-871000/id_rsa Username:docker}
	I0912 15:15:46.024332    4705 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 15:15:46.024382    4705 sshutil.go:53] new ssh client: &{IP:localhost Port:50256 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/running-upgrade-871000/id_rsa Username:docker}
	W0912 15:15:46.024930    4705 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50256: connect: connection refused
	I0912 15:15:46.024955    4705 retry.go:31] will retry after 327.366038ms: dial tcp [::1]:50256: connect: connection refused
	W0912 15:15:46.396378    4705 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0912 15:15:46.396541    4705 ssh_runner.go:195] Run: systemctl --version
	I0912 15:15:46.400423    4705 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 15:15:46.402983    4705 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 15:15:46.403023    4705 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0912 15:15:46.407393    4705 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0912 15:15:46.413303    4705 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 15:15:46.413311    4705 start.go:495] detecting cgroup driver to use...
	I0912 15:15:46.413377    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 15:15:46.419417    4705 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0912 15:15:46.423022    4705 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0912 15:15:46.426690    4705 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0912 15:15:46.426718    4705 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0912 15:15:46.429785    4705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 15:15:46.432679    4705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0912 15:15:46.435824    4705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 15:15:46.438982    4705 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 15:15:46.442062    4705 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0912 15:15:46.444781    4705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0912 15:15:46.447926    4705 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0912 15:15:46.451425    4705 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 15:15:46.454606    4705 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 15:15:46.457276    4705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 15:15:46.539303    4705 ssh_runner.go:195] Run: sudo systemctl restart containerd
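	The run of sed edits before this restart amounts to forcing a handful of settings in /etc/containerd/config.toml; a sketch of verifying the intended end state (expected values per the commands above):
	  grep -E 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	  # expected fragments:
	  #   SystemdCgroup = false            (cgroupfs driver)
	  #   sandbox_image = "registry.k8s.io/pause:3.7"
	  #   restrict_oom_score_adj = false
	  #   conf_dir = "/etc/cni/net.d"
	  #   enable_unprivileged_ports = true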
	I0912 15:15:46.550360    4705 start.go:495] detecting cgroup driver to use...
	I0912 15:15:46.550426    4705 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0912 15:15:46.559187    4705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 15:15:46.564171    4705 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 15:15:46.570162    4705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 15:15:46.574722    4705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 15:15:46.579169    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 15:15:46.584577    4705 ssh_runner.go:195] Run: which cri-dockerd
	I0912 15:15:46.585796    4705 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0912 15:15:46.588535    4705 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0912 15:15:46.593251    4705 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0912 15:15:46.685053    4705 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0912 15:15:46.776906    4705 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0912 15:15:46.776962    4705 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0912 15:15:46.782216    4705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 15:15:46.869727    4705 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0912 15:16:00.082862    4705 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.213489166s)
	I0912 15:16:00.082940    4705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0912 15:16:00.087870    4705 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0912 15:16:00.095002    4705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0912 15:16:00.101446    4705 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0912 15:16:00.173928    4705 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0912 15:16:00.249109    4705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 15:16:00.315503    4705 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0912 15:16:00.321724    4705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0912 15:16:00.326887    4705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 15:16:00.382994    4705 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0912 15:16:00.424855    4705 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0912 15:16:00.424925    4705 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0912 15:16:00.427180    4705 start.go:563] Will wait 60s for crictl version
	I0912 15:16:00.427230    4705 ssh_runner.go:195] Run: which crictl
	I0912 15:16:00.428689    4705 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 15:16:00.440185    4705 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0912 15:16:00.440259    4705 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0912 15:16:00.452022    4705 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0912 15:16:00.468123    4705 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0912 15:16:00.468235    4705 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0912 15:16:00.469648    4705 kubeadm.go:883] updating cluster {Name:running-upgrade-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50288 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...

	I0912 15:16:00.469693    4705 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0912 15:16:00.469729    4705 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0912 15:16:00.480697    4705 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0912 15:16:00.480706    4705 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0912 15:16:00.480753    4705 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0912 15:16:00.484213    4705 ssh_runner.go:195] Run: which lz4
	I0912 15:16:00.485557    4705 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 15:16:00.486764    4705 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 15:16:00.486775    4705 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0912 15:16:01.436064    4705 docker.go:649] duration metric: took 950.563ms to copy over tarball
	I0912 15:16:01.436135    4705 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 15:16:02.613530    4705 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.177408917s)
	I0912 15:16:02.613547    4705 ssh_runner.go:146] rm: /preloaded.tar.lz4
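	The preload path above is just copy-and-untar: ship the lz4 tarball into the guest and unpack it over /var so Docker's image store is pre-populated before the daemon restart. A sketch (guest address hypothetical, paths and flags from the log):
	  scp ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 docker@guest:/preloaded.tar.lz4
	  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	  rm /preloaded.tar.lz4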
	I0912 15:16:02.629100    4705 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0912 15:16:02.632098    4705 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0912 15:16:02.636544    4705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 15:16:02.690813    4705 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0912 15:16:03.901990    4705 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.211192083s)
	I0912 15:16:03.902080    4705 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0912 15:16:03.919118    4705 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0912 15:16:03.919132    4705 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0912 15:16:03.919137    4705 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0912 15:16:03.923370    4705 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 15:16:03.925478    4705 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0912 15:16:03.927647    4705 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0912 15:16:03.927646    4705 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 15:16:03.929560    4705 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0912 15:16:03.929614    4705 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0912 15:16:03.930275    4705 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0912 15:16:03.930578    4705 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0912 15:16:03.931635    4705 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0912 15:16:03.933093    4705 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0912 15:16:03.933125    4705 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0912 15:16:03.933310    4705 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0912 15:16:03.934147    4705 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0912 15:16:03.934441    4705 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0912 15:16:03.935376    4705 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0912 15:16:03.936090    4705 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0912 15:16:04.358776    4705 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0912 15:16:04.372115    4705 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0912 15:16:04.372144    4705 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0912 15:16:04.372199    4705 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0912 15:16:04.384673    4705 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0912 15:16:04.385086    4705 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0912 15:16:04.393469    4705 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0912 15:16:04.396234    4705 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0912 15:16:04.396254    4705 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0912 15:16:04.396292    4705 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0912 15:16:04.402578    4705 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0912 15:16:04.411279    4705 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0912 15:16:04.411312    4705 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0912 15:16:04.411365    4705 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0912 15:16:04.413759    4705 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0912 15:16:04.423447    4705 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0912 15:16:04.423467    4705 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0912 15:16:04.423518    4705 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0912 15:16:04.425942    4705 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0912 15:16:04.426053    4705 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0912 15:16:04.430314    4705 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0912 15:16:04.433911    4705 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0912 15:16:04.433961    4705 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0912 15:16:04.433975    4705 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	W0912 15:16:04.443591    4705 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0912 15:16:04.443715    4705 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0912 15:16:04.444196    4705 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0912 15:16:04.444215    4705 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0912 15:16:04.444242    4705 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0912 15:16:04.444819    4705 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0912 15:16:04.444829    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0912 15:16:04.448440    4705 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0912 15:16:04.461638    4705 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0912 15:16:04.461785    4705 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0912 15:16:04.461801    4705 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0912 15:16:04.461842    4705 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0912 15:16:04.496198    4705 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0912 15:16:04.496260    4705 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0912 15:16:04.496276    4705 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0912 15:16:04.496279    4705 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0912 15:16:04.496342    4705 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0912 15:16:04.496380    4705 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0912 15:16:04.506333    4705 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0912 15:16:04.506358    4705 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0912 15:16:04.506370    4705 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0912 15:16:04.506472    4705 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0912 15:16:04.518989    4705 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0912 15:16:04.519017    4705 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0912 15:16:04.571466    4705 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0912 15:16:04.571481    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0912 15:16:04.667899    4705 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0912 15:16:04.769382    4705 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0912 15:16:04.769501    4705 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 15:16:04.802428    4705 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0912 15:16:04.802456    4705 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 15:16:04.802521    4705 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 15:16:04.812248    4705 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0912 15:16:04.812261    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0912 15:16:05.396717    4705 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0912 15:16:05.397069    4705 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0912 15:16:05.397434    4705 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0912 15:16:05.403306    4705 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0912 15:16:05.403376    4705 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0912 15:16:05.461411    4705 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0912 15:16:05.461425    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0912 15:16:05.791851    4705 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0912 15:16:05.791892    4705 cache_images.go:92] duration metric: took 1.872801958s to LoadCachedImages
	W0912 15:16:05.791937    4705 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
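	The per-image fallback above follows one pattern: if the tag isn't present at the expected hash, remove the stale tag, copy the cached tarball in, and pipe it through docker load. A sketch using pause:3.7 (paths from the log, guest address hypothetical):
	  docker rmi registry.k8s.io/pause:3.7
	  scp ~/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 docker@guest:/var/lib/minikube/images/pause_3.7
	  sudo cat /var/lib/minikube/images/pause_3.7 | docker load
	Note the final warning: kube-proxy_v1.24.1 was never cached locally, so that one image cannot be loaded this way.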
	I0912 15:16:05.791942    4705 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0912 15:16:05.791987    4705 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-871000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 15:16:05.792079    4705 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0912 15:16:05.815138    4705 cni.go:84] Creating CNI manager for ""
	I0912 15:16:05.815153    4705 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:16:05.815159    4705 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 15:16:05.815168    4705 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-871000 NodeName:running-upgrade-871000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 15:16:05.815246    4705 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-871000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 15:16:05.815314    4705 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0912 15:16:05.821597    4705 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 15:16:05.821655    4705 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 15:16:05.827350    4705 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0912 15:16:05.833575    4705 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 15:16:05.839800    4705 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0912 15:16:05.848120    4705 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0912 15:16:05.849526    4705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 15:16:05.952662    4705 ssh_runner.go:195] Run: sudo systemctl start kubelet
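	With the drop-in from the kubelet [Unit] block above scp'd into place, the daemon-reload/start pair activates it; a sketch of confirming the unit picked up the cri-dockerd endpoint:
	  systemctl cat kubelet | grep container-runtime-endpoint   # unix:///var/run/cri-dockerd.sock
	  systemctl is-active kubelet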
	I0912 15:16:05.958365    4705 certs.go:68] Setting up /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/running-upgrade-871000 for IP: 10.0.2.15
	I0912 15:16:05.958374    4705 certs.go:194] generating shared ca certs ...
	I0912 15:16:05.958385    4705 certs.go:226] acquiring lock for ca certs: {Name:mkbb0c3f29ef431420fb2bc7ce1073854ddb346b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:16:05.958532    4705 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/ca.key
	I0912 15:16:05.958565    4705 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/proxy-client-ca.key
	I0912 15:16:05.958570    4705 certs.go:256] generating profile certs ...
	I0912 15:16:05.958626    4705 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/running-upgrade-871000/client.key
	I0912 15:16:05.958643    4705 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/running-upgrade-871000/apiserver.key.3da3c7e4
	I0912 15:16:05.958656    4705 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/running-upgrade-871000/apiserver.crt.3da3c7e4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0912 15:16:06.178593    4705 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/running-upgrade-871000/apiserver.crt.3da3c7e4 ...
	I0912 15:16:06.178609    4705 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/running-upgrade-871000/apiserver.crt.3da3c7e4: {Name:mkbb996b608d8d80e55abcaa4a962b780e144de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:16:06.178908    4705 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/running-upgrade-871000/apiserver.key.3da3c7e4 ...
	I0912 15:16:06.178912    4705 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/running-upgrade-871000/apiserver.key.3da3c7e4: {Name:mked7da428b4803fcc9284c0d40745aed96585c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:16:06.179067    4705 certs.go:381] copying /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/running-upgrade-871000/apiserver.crt.3da3c7e4 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/running-upgrade-871000/apiserver.crt
	I0912 15:16:06.179211    4705 certs.go:385] copying /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/running-upgrade-871000/apiserver.key.3da3c7e4 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/running-upgrade-871000/apiserver.key
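	A sketch of checking that the regenerated apiserver cert carries the SANs requested above (IP list from the crypto.go line; the cert may also carry DNS SANs not shown in the log):
	  openssl x509 -noout -text -in /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/running-upgrade-871000/apiserver.crt \
	    | grep -A1 'Subject Alternative Name'
	  # expect at least: IP Address:10.96.0.1, 127.0.0.1, 10.0.0.1, 10.0.2.15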
	I0912 15:16:06.179354    4705 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/running-upgrade-871000/proxy-client.key
	I0912 15:16:06.179490    4705 certs.go:484] found cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/1784.pem (1338 bytes)
	W0912 15:16:06.179515    4705 certs.go:480] ignoring /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/1784_empty.pem, impossibly tiny 0 bytes
	I0912 15:16:06.179521    4705 certs.go:484] found cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 15:16:06.179543    4705 certs.go:484] found cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem (1078 bytes)
	I0912 15:16:06.179561    4705 certs.go:484] found cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem (1123 bytes)
	I0912 15:16:06.179580    4705 certs.go:484] found cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/key.pem (1675 bytes)
	I0912 15:16:06.179626    4705 certs.go:484] found cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/files/etc/ssl/certs/17842.pem (1708 bytes)
	I0912 15:16:06.179954    4705 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 15:16:06.190620    4705 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 15:16:06.199655    4705 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 15:16:06.210900    4705 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 15:16:06.230896    4705 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/running-upgrade-871000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0912 15:16:06.241085    4705 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/running-upgrade-871000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 15:16:06.268764    4705 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/running-upgrade-871000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 15:16:06.279267    4705 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/running-upgrade-871000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 15:16:06.293723    4705 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 15:16:06.300980    4705 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/1784.pem --> /usr/share/ca-certificates/1784.pem (1338 bytes)
	I0912 15:16:06.314911    4705 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/files/etc/ssl/certs/17842.pem --> /usr/share/ca-certificates/17842.pem (1708 bytes)
	I0912 15:16:06.324095    4705 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 15:16:06.331422    4705 ssh_runner.go:195] Run: openssl version
	I0912 15:16:06.333255    4705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 15:16:06.337646    4705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 15:16:06.339803    4705 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:29 /usr/share/ca-certificates/minikubeCA.pem
	I0912 15:16:06.339830    4705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 15:16:06.343119    4705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 15:16:06.353915    4705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1784.pem && ln -fs /usr/share/ca-certificates/1784.pem /etc/ssl/certs/1784.pem"
	I0912 15:16:06.356879    4705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1784.pem
	I0912 15:16:06.358319    4705 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:44 /usr/share/ca-certificates/1784.pem
	I0912 15:16:06.358340    4705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1784.pem
	I0912 15:16:06.360441    4705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1784.pem /etc/ssl/certs/51391683.0"
	I0912 15:16:06.363200    4705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17842.pem && ln -fs /usr/share/ca-certificates/17842.pem /etc/ssl/certs/17842.pem"
	I0912 15:16:06.368229    4705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17842.pem
	I0912 15:16:06.369901    4705 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:44 /usr/share/ca-certificates/17842.pem
	I0912 15:16:06.369925    4705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17842.pem
	I0912 15:16:06.372015    4705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17842.pem /etc/ssl/certs/3ec20f2e.0"
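	The test/ln pairs above implement OpenSSL's hashed-directory lookup: a CA in /etc/ssl/certs must be reachable via a <subject-hash>.0 symlink. A sketch (the hash matches the b5213941.0 link created above):
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"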
	I0912 15:16:06.381710    4705 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 15:16:06.383184    4705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 15:16:06.384998    4705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 15:16:06.386885    4705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 15:16:06.393642    4705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 15:16:06.395744    4705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 15:16:06.397572    4705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
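	Each -checkend 86400 probe above exits non-zero if that cert expires within the next 24 hours, so six quiet runs mean every control-plane cert is good for at least another day; e.g.:
	  openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 && echo "valid >24h"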
	I0912 15:16:06.402115    4705 kubeadm.go:392] StartCluster: {Name:running-upgrade-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50288 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0912 15:16:06.402190    4705 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0912 15:16:06.453292    4705 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 15:16:06.456725    4705 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0912 15:16:06.456730    4705 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0912 15:16:06.456751    4705 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 15:16:06.459729    4705 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 15:16:06.459968    4705 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-871000" does not appear in /Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:16:06.460022    4705 kubeconfig.go:62] /Users/jenkins/minikube-integration/19616-1259/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-871000" cluster setting kubeconfig missing "running-upgrade-871000" context setting]
	I0912 15:16:06.460152    4705 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/kubeconfig: {Name:mk048c749582c7be36b3ac030be68b87cf483b70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:16:06.461788    4705 kapi.go:59] client config for running-upgrade-871000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/running-upgrade-871000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/running-upgrade-871000/client.key", CAFile:"/Users/jenkins/minikube-integration/19616-1259/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1042713d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0912 15:16:06.462125    4705 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 15:16:06.465035    4705 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-871000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
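	The drift check above is literally a diff of the on-disk kubeadm config against the freshly rendered one; any hunk (here the criSocket scheme and cgroup driver) forces a reconfigure, after which the .new file replaces the old one, as the cp further below shows:
	  sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new || echo "drift: will reconfigure"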
	I0912 15:16:06.465040    4705 kubeadm.go:1160] stopping kube-system containers ...
	I0912 15:16:06.465076    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0912 15:16:06.494534    4705 docker.go:483] Stopping containers: [7d3eeb6f3876 7db33cec1839 c4f21347dd41 3e74f44bd8e3 4a513e896ac2 04086a1d6c70 e318e3a83a81 6e474e4e40f8 d4a9a9f13e8a efc6493635a1 31c4fe5f5d33 b0f6d653a949 e8cbd7cb34df d131e5ec50de e5124bf0abde a045f7aeee60 67cead73caad 2a1662d040db]
	I0912 15:16:06.494604    4705 ssh_runner.go:195] Run: docker stop 7d3eeb6f3876 7db33cec1839 c4f21347dd41 3e74f44bd8e3 4a513e896ac2 04086a1d6c70 e318e3a83a81 6e474e4e40f8 d4a9a9f13e8a efc6493635a1 31c4fe5f5d33 b0f6d653a949 e8cbd7cb34df d131e5ec50de e5124bf0abde a045f7aeee60 67cead73caad 2a1662d040db
	I0912 15:16:07.400047    4705 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 15:16:07.476056    4705 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 15:16:07.487918    4705 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Sep 12 22:15 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Sep 12 22:15 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Sep 12 22:15 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Sep 12 22:15 /etc/kubernetes/scheduler.conf
	
	I0912 15:16:07.487969    4705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50288 /etc/kubernetes/admin.conf
	I0912 15:16:07.492110    4705 kubeadm.go:163] "https://control-plane.minikube.internal:50288" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50288 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0912 15:16:07.492145    4705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 15:16:07.495695    4705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50288 /etc/kubernetes/kubelet.conf
	I0912 15:16:07.498654    4705 kubeadm.go:163] "https://control-plane.minikube.internal:50288" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50288 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0912 15:16:07.498678    4705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 15:16:07.501374    4705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50288 /etc/kubernetes/controller-manager.conf
	I0912 15:16:07.504783    4705 kubeadm.go:163] "https://control-plane.minikube.internal:50288" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50288 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0912 15:16:07.504810    4705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 15:16:07.508222    4705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50288 /etc/kubernetes/scheduler.conf
	I0912 15:16:07.511545    4705 kubeadm.go:163] "https://control-plane.minikube.internal:50288" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50288 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0912 15:16:07.511568    4705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
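	The four grep/rm pairs above share one pattern: any kubeconfig under /etc/kubernetes that doesn't already point at the expected control-plane endpoint is deleted so the kubeadm init phases below can regenerate it. A compact sketch:
	  for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q https://control-plane.minikube.internal:50288 /etc/kubernetes/${f}.conf \
	      || sudo rm -f /etc/kubernetes/${f}.conf
	  done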
	I0912 15:16:07.514814    4705 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 15:16:07.517821    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 15:16:07.547350    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 15:16:08.025359    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 15:16:08.213328    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 15:16:08.235674    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0912 15:16:08.259131    4705 api_server.go:52] waiting for apiserver process to appear ...
	I0912 15:16:08.259212    4705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 15:16:08.761493    4705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 15:16:09.261265    4705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 15:16:09.265438    4705 api_server.go:72] duration metric: took 1.006336541s to wait for apiserver process to appear ...
	I0912 15:16:09.265446    4705 api_server.go:88] waiting for apiserver healthz status ...
	I0912 15:16:09.265456    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:16:14.267489    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:16:14.267524    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:16:19.267762    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:16:19.267850    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:16:24.268646    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:16:24.268699    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:16:29.269400    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:16:29.269463    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:16:34.269816    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:16:34.269872    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:16:39.271012    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:16:39.271169    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:16:44.272957    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:16:44.273002    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:16:49.274978    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:16:49.275066    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:16:54.277711    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:16:54.277856    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:16:59.278416    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:16:59.278493    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:17:04.280096    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:17:04.280146    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:17:09.282503    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
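	Every probe above is a GET against the apiserver's healthz endpoint with a short client timeout; twelve straight timeouts are why the run falls back to gathering container logs below. The equivalent manual check (-k because the host doesn't trust the cluster CA):
	  curl -k --max-time 5 https://10.0.2.15:8443/healthz   # a healthy apiserver answers "ok"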
	I0912 15:17:09.282926    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:17:09.320844    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:17:09.320984    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:17:09.341940    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:17:09.342053    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:17:09.360009    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:17:09.360086    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:17:09.378102    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:17:09.378177    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:17:09.388749    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:17:09.388813    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:17:09.399575    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:17:09.399637    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:17:09.409567    4705 logs.go:276] 0 containers: []
	W0912 15:17:09.409583    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:17:09.409633    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:17:09.419970    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:17:09.419988    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:17:09.419993    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:17:09.425122    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:17:09.425132    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:17:09.437350    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:17:09.437363    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:17:09.449526    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:17:09.449540    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:17:09.460638    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:17:09.460649    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:17:09.472660    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:17:09.472669    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:17:09.483860    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:17:09.483872    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:17:09.508804    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:17:09.508814    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:17:09.521973    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:17:09.521983    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:17:09.533612    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:17:09.533624    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:17:09.550532    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:17:09.550543    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:17:09.562056    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:17:09.562069    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:17:09.579118    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:17:09.579130    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:17:09.589917    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:17:09.589927    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:17:09.604001    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:17:09.604012    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:17:09.641019    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:17:09.641027    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:17:09.710483    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:17:09.710493    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
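[Editor's sketch] The cycle just completed (and repeated below on each failed probe) enumerates the k8s_* containers with `docker ps -a --filter=name=... --format={{.ID}}` and then tails each one with `docker logs --tail 400`. A self-contained Go sketch of that diagnostic sweep follows, run locally rather than over SSH as minikube does (ssh_runner.go); the component list and commands are taken from the log, the helper name containerIDs is hypothetical.

	// Sketch only: mirrors the container-discovery and log-gathering commands above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists container IDs whose name matches k8s_<component>,
	// like the "docker ps -a --filter=name=k8s_etcd --format={{.ID}}" steps.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		// The components enumerated in each cycle of the log.
		components := []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager",
			"kindnet", "storage-provisioner"}
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil || len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
				continue
			}
			for _, id := range ids {
				// "docker logs --tail 400 <id>" as in the gathering steps above.
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("=== %s [%s] ===\n%s\n", c, id, logs)
			}
		}
	}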
	I0912 15:17:12.227348    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:17:17.229793    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:17:17.229975    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:17:17.241202    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:17:17.241276    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:17:17.251673    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:17:17.251737    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:17:17.262114    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:17:17.262181    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:17:17.272382    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:17:17.272448    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:17:17.282926    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:17:17.282982    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:17:17.293045    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:17:17.293112    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:17:17.310028    4705 logs.go:276] 0 containers: []
	W0912 15:17:17.310040    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:17:17.310098    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:17:17.320430    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:17:17.320444    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:17:17.320449    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:17:17.332290    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:17:17.332302    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:17:17.349621    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:17:17.349631    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:17:17.360913    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:17:17.360923    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:17:17.365110    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:17:17.365116    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:17:17.376561    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:17:17.376574    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:17:17.388134    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:17:17.388149    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:17:17.415188    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:17:17.415194    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:17:17.452209    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:17:17.452217    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:17:17.466220    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:17:17.466232    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:17:17.479978    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:17:17.479992    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:17:17.493852    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:17:17.493863    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:17:17.506705    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:17:17.506718    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:17:17.517723    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:17:17.517735    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:17:17.528152    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:17:17.528164    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:17:17.540108    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:17:17.540120    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:17:17.579019    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:17:17.579035    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:17:20.095633    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:17:25.098004    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:17:25.098447    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:17:25.136002    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:17:25.136189    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:17:25.158014    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:17:25.158122    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:17:25.175445    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:17:25.175525    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:17:25.188274    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:17:25.188362    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:17:25.198902    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:17:25.198972    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:17:25.209705    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:17:25.209771    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:17:25.221175    4705 logs.go:276] 0 containers: []
	W0912 15:17:25.221184    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:17:25.221244    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:17:25.232663    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:17:25.232680    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:17:25.232686    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:17:25.267063    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:17:25.267077    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:17:25.282164    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:17:25.282177    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:17:25.296239    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:17:25.296250    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:17:25.307432    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:17:25.307442    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:17:25.320731    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:17:25.320741    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:17:25.345330    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:17:25.345340    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:17:25.356700    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:17:25.356710    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:17:25.370258    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:17:25.370271    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:17:25.388452    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:17:25.388466    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:17:25.400714    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:17:25.400731    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:17:25.437879    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:17:25.437887    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:17:25.442119    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:17:25.442127    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:17:25.455680    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:17:25.455691    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:17:25.467053    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:17:25.467064    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:17:25.481541    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:17:25.481553    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:17:25.497587    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:17:25.497598    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:17:28.010291    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:17:33.013032    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:17:33.013450    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:17:33.041953    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:17:33.042083    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:17:33.060191    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:17:33.060267    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:17:33.073882    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:17:33.073968    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:17:33.085770    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:17:33.085845    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:17:33.095911    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:17:33.095995    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:17:33.106439    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:17:33.106507    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:17:33.116350    4705 logs.go:276] 0 containers: []
	W0912 15:17:33.116359    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:17:33.116430    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:17:33.127128    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:17:33.127145    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:17:33.127151    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:17:33.138564    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:17:33.138575    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:17:33.150637    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:17:33.150646    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:17:33.162696    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:17:33.162706    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:17:33.173845    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:17:33.173861    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:17:33.210763    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:17:33.210783    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:17:33.247416    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:17:33.247431    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:17:33.261307    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:17:33.261317    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:17:33.274189    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:17:33.274199    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:17:33.300749    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:17:33.300758    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:17:33.312710    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:17:33.312719    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:17:33.330214    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:17:33.330223    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:17:33.334700    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:17:33.334710    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:17:33.348187    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:17:33.348197    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:17:33.361766    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:17:33.361775    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:17:33.373189    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:17:33.373200    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:17:33.384228    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:17:33.384241    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:17:35.898325    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:17:40.899428    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:17:40.899840    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:17:40.942614    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:17:40.942741    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:17:40.963824    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:17:40.963940    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:17:40.978800    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:17:40.978875    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:17:40.991345    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:17:40.991416    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:17:41.002437    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:17:41.002500    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:17:41.012930    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:17:41.012992    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:17:41.026121    4705 logs.go:276] 0 containers: []
	W0912 15:17:41.026140    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:17:41.026197    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:17:41.037030    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:17:41.037048    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:17:41.037053    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:17:41.048226    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:17:41.048238    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:17:41.059825    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:17:41.059834    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:17:41.085001    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:17:41.085012    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:17:41.122767    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:17:41.122776    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:17:41.127413    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:17:41.127420    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:17:41.141148    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:17:41.141157    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:17:41.159018    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:17:41.159029    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:17:41.170909    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:17:41.170923    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:17:41.186091    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:17:41.186100    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:17:41.198454    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:17:41.198467    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:17:41.233634    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:17:41.233648    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:17:41.247329    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:17:41.247342    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:17:41.258890    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:17:41.258901    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:17:41.270949    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:17:41.270958    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:17:41.282582    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:17:41.282592    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:17:41.298389    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:17:41.298401    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:17:43.818140    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:17:48.820431    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:17:48.820860    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:17:48.862325    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:17:48.862461    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:17:48.886255    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:17:48.886345    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:17:48.900733    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:17:48.900815    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:17:48.912984    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:17:48.913041    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:17:48.923116    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:17:48.923178    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:17:48.933487    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:17:48.933552    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:17:48.943972    4705 logs.go:276] 0 containers: []
	W0912 15:17:48.943983    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:17:48.944034    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:17:48.955095    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:17:48.955112    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:17:48.955117    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:17:48.959707    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:17:48.959715    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:17:48.972111    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:17:48.972123    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:17:48.986088    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:17:48.986097    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:17:49.001266    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:17:49.001277    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:17:49.015371    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:17:49.015382    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:17:49.027639    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:17:49.027649    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:17:49.039069    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:17:49.039080    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:17:49.058578    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:17:49.058588    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:17:49.069710    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:17:49.069724    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:17:49.082197    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:17:49.082212    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:17:49.093704    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:17:49.093717    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:17:49.128819    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:17:49.128826    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:17:49.163837    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:17:49.163850    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:17:49.178536    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:17:49.178549    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:17:49.192661    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:17:49.192671    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:17:49.204395    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:17:49.204406    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:17:51.731049    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:17:56.733138    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:17:56.733344    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:17:56.759188    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:17:56.759305    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:17:56.776253    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:17:56.776339    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:17:56.789329    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:17:56.789398    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:17:56.800537    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:17:56.800594    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:17:56.810907    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:17:56.810962    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:17:56.821148    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:17:56.821213    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:17:56.831250    4705 logs.go:276] 0 containers: []
	W0912 15:17:56.831262    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:17:56.831312    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:17:56.841663    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:17:56.841684    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:17:56.841691    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:17:56.878682    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:17:56.878690    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:17:56.913971    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:17:56.913984    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:17:56.926133    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:17:56.926145    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:17:56.937893    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:17:56.937906    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:17:56.962801    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:17:56.962814    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:17:56.974874    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:17:56.974886    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:17:56.986053    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:17:56.986069    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:17:56.997240    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:17:56.997252    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:17:57.023165    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:17:57.023172    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:17:57.027757    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:17:57.027762    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:17:57.041253    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:17:57.041265    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:17:57.054752    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:17:57.054764    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:17:57.066521    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:17:57.066532    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:17:57.083915    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:17:57.083925    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:17:57.096111    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:17:57.096122    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:17:57.110754    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:17:57.110767    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:17:59.625425    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:18:04.626632    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:18:04.626737    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:18:04.637937    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:18:04.638005    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:18:04.649606    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:18:04.649668    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:18:04.667206    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:18:04.667304    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:18:04.678950    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:18:04.679018    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:18:04.690307    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:18:04.690374    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:18:04.703766    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:18:04.703835    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:18:04.717287    4705 logs.go:276] 0 containers: []
	W0912 15:18:04.717299    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:18:04.717360    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:18:04.728567    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:18:04.728584    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:18:04.728590    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:18:04.740859    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:18:04.740870    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:18:04.765954    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:18:04.765961    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:18:04.780854    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:18:04.780869    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:18:04.792701    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:18:04.792714    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:18:04.804708    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:18:04.804721    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:18:04.841425    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:18:04.841434    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:18:04.854437    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:18:04.854445    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:18:04.872673    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:18:04.872690    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:18:04.884903    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:18:04.884918    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:18:04.896574    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:18:04.896585    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:18:04.908150    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:18:04.908161    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:18:04.912886    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:18:04.912893    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:18:04.949218    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:18:04.949230    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:18:04.965851    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:18:04.965864    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:18:04.979824    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:18:04.979838    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:18:05.004638    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:18:05.004650    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:18:07.519597    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:18:12.521709    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:18:12.521859    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:18:12.533982    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:18:12.534053    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:18:12.545297    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:18:12.545361    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:18:12.556112    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:18:12.556181    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:18:12.568804    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:18:12.568865    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:18:12.579983    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:18:12.580043    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:18:12.592788    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:18:12.592853    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:18:12.607996    4705 logs.go:276] 0 containers: []
	W0912 15:18:12.608010    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:18:12.608061    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:18:12.619167    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:18:12.619182    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:18:12.619187    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:18:12.633609    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:18:12.633622    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:18:12.645868    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:18:12.645879    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:18:12.658913    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:18:12.658930    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:18:12.670744    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:18:12.670758    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:18:12.685341    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:18:12.685351    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:18:12.723714    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:18:12.723726    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:18:12.735317    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:18:12.735329    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:18:12.774172    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:18:12.774184    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:18:12.786399    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:18:12.786412    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:18:12.801124    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:18:12.801133    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:18:12.820138    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:18:12.820148    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:18:12.824774    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:18:12.824782    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:18:12.837606    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:18:12.837617    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:18:12.851371    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:18:12.851383    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:18:12.863796    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:18:12.863808    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:18:12.889247    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:18:12.889255    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:18:15.405909    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:18:20.408007    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:18:20.408187    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:18:20.426527    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:18:20.426611    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:18:20.440213    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:18:20.440281    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:18:20.452035    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:18:20.452091    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:18:20.462096    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:18:20.462158    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:18:20.472353    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:18:20.472408    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:18:20.490613    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:18:20.490690    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:18:20.500684    4705 logs.go:276] 0 containers: []
	W0912 15:18:20.500693    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:18:20.500750    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:18:20.511180    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:18:20.511195    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:18:20.511200    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:18:20.522783    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:18:20.522797    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:18:20.536360    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:18:20.536372    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:18:20.547656    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:18:20.547667    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:18:20.571884    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:18:20.571893    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:18:20.589154    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:18:20.589165    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:18:20.600895    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:18:20.600906    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:18:20.638589    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:18:20.638605    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:18:20.676590    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:18:20.676600    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:18:20.690479    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:18:20.690490    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:18:20.702375    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:18:20.702385    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:18:20.715625    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:18:20.715641    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:18:20.726689    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:18:20.726699    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:18:20.738789    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:18:20.738800    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:18:20.743582    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:18:20.743590    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:18:20.757281    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:18:20.757290    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:18:20.771958    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:18:20.771969    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:18:23.284901    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:18:28.286982    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:18:28.287091    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:18:28.298872    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:18:28.298962    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:18:28.309577    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:18:28.309646    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:18:28.324729    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:18:28.324797    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:18:28.335383    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:18:28.335456    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:18:28.350237    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:18:28.350302    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:18:28.360850    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:18:28.360918    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:18:28.371735    4705 logs.go:276] 0 containers: []
	W0912 15:18:28.371747    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:18:28.371807    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:18:28.382173    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:18:28.382190    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:18:28.382195    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:18:28.394849    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:18:28.394860    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:18:28.430320    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:18:28.430332    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:18:28.447924    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:18:28.447935    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:18:28.459319    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:18:28.459334    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:18:28.470900    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:18:28.470913    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:18:28.495105    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:18:28.495114    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:18:28.499610    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:18:28.499620    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:18:28.514161    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:18:28.514171    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:18:28.525667    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:18:28.525678    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:18:28.545462    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:18:28.545472    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:18:28.558650    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:18:28.558661    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:18:28.578167    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:18:28.578177    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:18:28.616940    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:18:28.616960    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:18:28.636318    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:18:28.636329    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:18:28.648534    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:18:28.648545    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:18:28.666209    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:18:28.666219    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
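Each "Gathering logs for <component> [<id>]" pair above then tails the last 400 lines of that container, wrapped in `/bin/bash -c` by ssh_runner. A sketch of the equivalent call made directly, under the same assumption that docker is available; the container ID is one of the kube-scheduler IDs copied from the log:

package main

import (
	"fmt"
	"os/exec"
)

// tailLogs mirrors the `docker logs --tail 400 <id>` invocations in the
// log: the last 400 lines are fetched for each discovered container ID,
// including exited previous instances of the same component.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	logs, err := tailLogs("a189dd704fb2") // kube-scheduler ID from the log above
	if err != nil {
		fmt.Println("docker logs failed:", err)
		return
	}
	fmt.Print(logs)
}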
	I0912 15:18:31.179879    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:18:36.181230    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
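The "Checking apiserver healthz ... stopped" pairs bracket a GET against https://10.0.2.15:8443/healthz that never answers; the five-second gap between the timestamps (e.g. 15:18:31 to 15:18:36) and the "Client.Timeout exceeded while awaiting headers" text match a Go HTTP client with a 5-second timeout. A sketch of that kind of probe, assuming the same URL and timeout; it skips TLS verification for brevity, which the real client presumably does not:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// A healthz probe with a 5-second client timeout, mirroring the
	// api_server.go:253/269 lines in the log.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.0.2.15:8443/healthz")
	if err != nil {
		// With an unreachable apiserver this prints the same
		// "context deadline exceeded (Client.Timeout exceeded while
		// awaiting headers)" error seen throughout the log.
		fmt.Println("stopped:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}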
	I0912 15:18:36.181465    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:18:36.203963    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:18:36.204070    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:18:36.218670    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:18:36.218748    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:18:36.230834    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:18:36.230908    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:18:36.247427    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:18:36.247502    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:18:36.258362    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:18:36.258430    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:18:36.268861    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:18:36.268931    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:18:36.279079    4705 logs.go:276] 0 containers: []
	W0912 15:18:36.279095    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:18:36.279153    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:18:36.289642    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:18:36.289660    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:18:36.289665    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:18:36.294120    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:18:36.294127    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:18:36.307897    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:18:36.307907    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:18:36.321036    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:18:36.321046    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:18:36.356784    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:18:36.356794    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:18:36.368031    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:18:36.368043    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:18:36.385400    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:18:36.385416    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:18:36.398257    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:18:36.398275    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:18:36.409253    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:18:36.409267    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:18:36.421071    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:18:36.421087    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:18:36.455479    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:18:36.455489    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:18:36.472875    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:18:36.472884    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:18:36.484859    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:18:36.484869    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:18:36.508722    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:18:36.508732    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:18:36.520863    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:18:36.520872    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:18:36.534352    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:18:36.534363    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:18:36.546542    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:18:36.546551    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:18:39.059915    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:18:44.062107    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
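From here the same cycle repeats every eight seconds or so: probe healthz, time out after 5 s, dump the full diagnostic bundle, pause roughly 2.5 s, retry. A sketch of that overall loop shape as the timestamps suggest it; the function names and the overall deadline are illustrative stand-ins, not minikube's actual API:

package main

import (
	"fmt"
	"time"
)

func probeHealthz() error { // stand-in for the real HTTPS healthz probe
	return fmt.Errorf("context deadline exceeded")
}

func gatherDiagnostics() { // stand-in for the "Gathering logs for ..." block
	fmt.Println("gathering container and host logs ...")
}

func waitForAPIServer(deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		if err := probeHealthz(); err == nil {
			return nil
		}
		gatherDiagnostics()
		// ~2.5 s pause between attempts, matching the gaps in the
		// timestamps above (gather ends ~15:18:36.55, next check 15:18:39.06).
		time.Sleep(2500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy within %s", deadline)
}

func main() {
	fmt.Println(waitForAPIServer(10 * time.Second))
}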
	I0912 15:18:44.062214    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:18:44.074712    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:18:44.074786    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:18:44.086536    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:18:44.086613    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:18:44.105956    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:18:44.106030    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:18:44.117939    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:18:44.118001    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:18:44.130436    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:18:44.130520    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:18:44.141230    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:18:44.141301    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:18:44.152194    4705 logs.go:276] 0 containers: []
	W0912 15:18:44.152207    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:18:44.152276    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:18:44.163038    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:18:44.163056    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:18:44.163063    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:18:44.188433    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:18:44.188457    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:18:44.201879    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:18:44.201891    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:18:44.216972    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:18:44.216984    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:18:44.229618    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:18:44.229630    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:18:44.243264    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:18:44.243278    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:18:44.256538    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:18:44.256551    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:18:44.269179    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:18:44.269192    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:18:44.281877    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:18:44.281891    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:18:44.298320    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:18:44.298341    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:18:44.311601    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:18:44.311616    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:18:44.332302    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:18:44.332333    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:18:44.345085    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:18:44.345104    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:18:44.349942    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:18:44.349952    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:18:44.364753    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:18:44.364767    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:18:44.381062    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:18:44.381077    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:18:44.421579    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:18:44.421599    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
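Besides per-container logs, each pass also collects four host-level sources: the kubelet journal, the docker/cri-docker journal, filtered dmesg output, and `kubectl describe nodes` via the bundled kubectl binary. A sketch that runs those commands through `/bin/bash -c` exactly as ssh_runner does; the command strings, unit names, and paths are copied verbatim from the log lines, nothing is invented:

package main

import (
	"fmt"
	"os/exec"
)

// The host-level diagnostics gathered in each pass above.
var hostDiagnostics = []string{
	`sudo journalctl -u kubelet -n 400`,
	`sudo journalctl -u docker -u cri-docker -n 400`,
	`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
	`sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`,
}

func main() {
	for _, cmd := range hostDiagnostics {
		// Only meaningful inside the minikube guest; elsewhere each
		// command fails and is reported rather than crashing the loop.
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("%q failed: %v\n", cmd, err)
			continue
		}
		fmt.Printf("=== %s ===\n%s\n", cmd, out)
	}
}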
	I0912 15:18:46.959409    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:18:51.960041    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:18:51.960505    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:18:51.998920    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:18:51.999061    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:18:52.020929    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:18:52.021024    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:18:52.039796    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:18:52.039876    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:18:52.051752    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:18:52.051817    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:18:52.062611    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:18:52.062681    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:18:52.073161    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:18:52.073229    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:18:52.083928    4705 logs.go:276] 0 containers: []
	W0912 15:18:52.083942    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:18:52.084003    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:18:52.094896    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:18:52.094917    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:18:52.094922    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:18:52.099373    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:18:52.099380    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:18:52.111393    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:18:52.111404    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:18:52.123649    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:18:52.123665    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:18:52.135245    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:18:52.135260    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:18:52.170738    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:18:52.170748    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:18:52.187302    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:18:52.187311    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:18:52.200924    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:18:52.200937    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:18:52.212034    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:18:52.212046    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:18:52.252764    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:18:52.252777    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:18:52.265013    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:18:52.265026    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:18:52.276681    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:18:52.276692    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:18:52.302227    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:18:52.302237    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:18:52.314647    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:18:52.314658    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:18:52.328516    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:18:52.328526    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:18:52.339723    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:18:52.339735    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:18:52.358363    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:18:52.358374    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:18:54.884389    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:18:59.887192    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:18:59.887650    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:18:59.926882    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:18:59.927016    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:18:59.955746    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:18:59.955838    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:18:59.969256    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:18:59.969330    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:18:59.981200    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:18:59.981267    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:18:59.996881    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:18:59.996941    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:19:00.007776    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:19:00.007843    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:19:00.018472    4705 logs.go:276] 0 containers: []
	W0912 15:19:00.018486    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:19:00.018549    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:19:00.028864    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:19:00.028884    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:19:00.028889    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:19:00.052032    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:19:00.052042    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:19:00.063576    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:19:00.063588    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:19:00.076948    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:19:00.076961    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:19:00.087773    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:19:00.087785    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:19:00.108365    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:19:00.108378    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:19:00.124368    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:19:00.124379    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:19:00.159651    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:19:00.159657    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:19:00.171725    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:19:00.171738    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:19:00.183564    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:19:00.183574    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:19:00.194684    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:19:00.194695    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:19:00.206375    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:19:00.206386    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:19:00.217223    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:19:00.217234    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:19:00.254158    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:19:00.254169    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:19:00.269306    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:19:00.269319    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:19:00.283251    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:19:00.283262    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:19:00.301968    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:19:00.301977    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:19:02.808477    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:19:07.809554    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:19:07.809744    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:19:07.821158    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:19:07.821237    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:19:07.835058    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:19:07.835127    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:19:07.847073    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:19:07.847149    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:19:07.858774    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:19:07.858853    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:19:07.870976    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:19:07.871051    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:19:07.883075    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:19:07.883151    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:19:07.895078    4705 logs.go:276] 0 containers: []
	W0912 15:19:07.895091    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:19:07.895151    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:19:07.907688    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:19:07.907707    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:19:07.907713    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:19:07.940642    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:19:07.940661    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:19:07.969196    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:19:07.969212    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:19:07.988899    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:19:07.988913    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:19:08.002607    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:19:08.002621    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:19:08.016942    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:19:08.016956    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:19:08.056474    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:19:08.056493    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:19:08.070471    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:19:08.070485    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:19:08.083718    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:19:08.083732    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:19:08.099027    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:19:08.099044    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:19:08.115352    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:19:08.115372    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:19:08.130192    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:19:08.130204    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:19:08.156928    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:19:08.156946    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:19:08.162415    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:19:08.162427    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:19:08.174547    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:19:08.174559    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:19:08.193191    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:19:08.193206    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:19:08.205678    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:19:08.205690    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:19:10.749236    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:19:15.751491    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:19:15.751771    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:19:15.778972    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:19:15.779080    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:19:15.796173    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:19:15.796259    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:19:15.810482    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:19:15.810556    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:19:15.821786    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:19:15.821851    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:19:15.832549    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:19:15.832616    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:19:15.847033    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:19:15.847102    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:19:15.856814    4705 logs.go:276] 0 containers: []
	W0912 15:19:15.856827    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:19:15.856887    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:19:15.867758    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:19:15.867779    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:19:15.867793    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:19:15.880044    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:19:15.880055    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:19:15.891291    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:19:15.891306    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:19:15.903229    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:19:15.903239    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:19:15.918972    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:19:15.918984    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:19:15.931905    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:19:15.931915    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:19:15.943233    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:19:15.943243    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:19:15.955758    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:19:15.955770    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:19:15.960554    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:19:15.960563    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:19:15.974643    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:19:15.974653    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:19:15.993272    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:19:15.993281    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:19:16.005557    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:19:16.005566    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:19:16.016839    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:19:16.016850    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:19:16.039742    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:19:16.039749    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:19:16.075765    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:19:16.075776    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:19:16.113126    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:19:16.113137    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:19:16.133463    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:19:16.133474    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:19:18.647686    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:19:23.649013    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:19:23.649176    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:19:23.661732    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:19:23.661815    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:19:23.672495    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:19:23.672561    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:19:23.684438    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:19:23.684505    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:19:23.695501    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:19:23.695566    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:19:23.708852    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:19:23.708923    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:19:23.721291    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:19:23.721365    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:19:23.741607    4705 logs.go:276] 0 containers: []
	W0912 15:19:23.741620    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:19:23.741680    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:19:23.752116    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:19:23.752135    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:19:23.752141    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:19:23.765889    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:19:23.765903    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:19:23.779793    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:19:23.779805    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:19:23.817845    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:19:23.817866    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:19:23.834408    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:19:23.834421    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:19:23.852560    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:19:23.852575    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:19:23.865059    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:19:23.865074    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:19:23.869395    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:19:23.869403    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:19:23.880839    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:19:23.880852    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:19:23.896251    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:19:23.896263    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:19:23.907943    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:19:23.907961    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:19:23.927507    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:19:23.927520    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:19:23.964194    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:19:23.964209    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:19:23.982737    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:19:23.982749    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:19:23.994423    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:19:23.994436    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:19:24.019517    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:19:24.019528    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:19:24.031863    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:19:24.031879    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:19:26.545977    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:19:31.548254    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:19:31.548639    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:19:31.588693    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:19:31.588825    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:19:31.614987    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:19:31.615078    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:19:31.628140    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:19:31.628216    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:19:31.639452    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:19:31.639522    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:19:31.650132    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:19:31.650203    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:19:31.660811    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:19:31.660882    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:19:31.670865    4705 logs.go:276] 0 containers: []
	W0912 15:19:31.670876    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:19:31.670937    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:19:31.681080    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:19:31.681097    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:19:31.681102    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:19:31.692541    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:19:31.692554    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:19:31.715720    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:19:31.715730    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:19:31.753063    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:19:31.753076    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:19:31.767379    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:19:31.767389    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:19:31.778975    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:19:31.778993    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:19:31.790917    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:19:31.790927    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:19:31.809609    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:19:31.809620    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:19:31.827011    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:19:31.827020    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:19:31.838812    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:19:31.838823    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:19:31.874496    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:19:31.874509    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:19:31.885759    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:19:31.885771    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:19:31.897944    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:19:31.897956    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:19:31.913352    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:19:31.913369    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:19:31.925792    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:19:31.925803    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:19:31.930532    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:19:31.930542    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:19:31.945997    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:19:31.946008    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:19:34.461997    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:19:39.464140    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:19:39.464255    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:19:39.477451    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:19:39.477531    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:19:39.488541    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:19:39.488603    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:19:39.498766    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:19:39.498832    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:19:39.509008    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:19:39.509073    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:19:39.519675    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:19:39.519739    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:19:39.530539    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:19:39.530598    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:19:39.540905    4705 logs.go:276] 0 containers: []
	W0912 15:19:39.540918    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:19:39.540971    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:19:39.551643    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:19:39.551661    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:19:39.551666    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:19:39.563035    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:19:39.563048    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:19:39.600970    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:19:39.600979    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:19:39.613098    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:19:39.613108    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:19:39.625042    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:19:39.625053    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:19:39.643101    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:19:39.643112    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:19:39.667950    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:19:39.667958    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:19:39.701342    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:19:39.701353    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:19:39.716337    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:19:39.716348    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:19:39.728115    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:19:39.728127    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:19:39.744810    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:19:39.744822    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:19:39.749052    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:19:39.749059    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:19:39.763126    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:19:39.763136    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:19:39.775696    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:19:39.775707    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:19:39.792876    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:19:39.792887    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:19:39.806588    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:19:39.806599    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:19:39.817817    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:19:39.817828    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:19:42.334015    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:19:47.336092    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:19:47.336247    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:19:47.351190    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:19:47.351277    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:19:47.363926    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:19:47.364000    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:19:47.374549    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:19:47.374616    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:19:47.385364    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:19:47.385435    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:19:47.399804    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:19:47.399879    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:19:47.410174    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:19:47.410239    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:19:47.420459    4705 logs.go:276] 0 containers: []
	W0912 15:19:47.420470    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:19:47.420532    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:19:47.430831    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:19:47.430850    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:19:47.430856    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:19:47.442931    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:19:47.442942    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:19:47.454509    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:19:47.454518    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:19:47.471901    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:19:47.471910    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:19:47.507406    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:19:47.507416    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:19:47.520130    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:19:47.520140    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:19:47.531234    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:19:47.531247    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:19:47.545502    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:19:47.545516    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:19:47.550598    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:19:47.550610    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:19:47.562253    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:19:47.562267    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:19:47.573768    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:19:47.573779    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:19:47.598739    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:19:47.598761    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:19:47.612654    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:19:47.612668    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:19:47.624353    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:19:47.624370    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:19:47.659898    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:19:47.659907    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:19:47.677612    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:19:47.677622    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:19:47.697331    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:19:47.697342    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:19:50.213389    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:19:55.215635    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:19:55.215814    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:19:55.241974    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:19:55.242055    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:19:55.254472    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:19:55.254549    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:19:55.267594    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:19:55.267664    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:19:55.279322    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:19:55.279385    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:19:55.290794    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:19:55.290864    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:19:55.302340    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:19:55.302411    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:19:55.313731    4705 logs.go:276] 0 containers: []
	W0912 15:19:55.313743    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:19:55.313801    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:19:55.325196    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:19:55.325217    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:19:55.325223    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:19:55.340187    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:19:55.340198    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:19:55.358380    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:19:55.358393    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:19:55.373750    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:19:55.373762    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:19:55.386899    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:19:55.386912    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:19:55.398879    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:19:55.398890    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:19:55.435698    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:19:55.435710    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:19:55.447675    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:19:55.447687    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:19:55.459629    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:19:55.459643    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:19:55.499162    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:19:55.499174    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:19:55.523991    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:19:55.524001    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:19:55.541105    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:19:55.541116    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:19:55.555641    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:19:55.555653    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:19:55.577055    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:19:55.577067    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:19:55.588650    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:19:55.588662    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:19:55.600591    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:19:55.600605    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:19:55.611730    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:19:55.611742    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:19:58.118263    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:03.120431    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:03.120736    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:20:03.159582    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:20:03.159712    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:20:03.180467    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:20:03.180576    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:20:03.195554    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:20:03.195628    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:20:03.208699    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:20:03.208762    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:20:03.222818    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:20:03.222886    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:20:03.233571    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:20:03.233642    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:20:03.244874    4705 logs.go:276] 0 containers: []
	W0912 15:20:03.244900    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:20:03.244960    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:20:03.255496    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:20:03.255513    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:20:03.255518    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:20:03.267869    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:20:03.267879    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:20:03.280304    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:20:03.280314    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:20:03.285919    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:20:03.285929    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:20:03.299909    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:20:03.299927    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:20:03.312200    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:20:03.312212    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:20:03.350898    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:20:03.350909    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:20:03.365058    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:20:03.365070    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:20:03.376885    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:20:03.376900    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:20:03.394467    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:20:03.394477    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:20:03.405274    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:20:03.405286    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:20:03.417207    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:20:03.417217    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:20:03.429008    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:20:03.429020    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:20:03.465523    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:20:03.465534    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:20:03.480763    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:20:03.480773    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:20:03.503369    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:20:03.503377    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:20:03.517648    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:20:03.517663    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:20:06.033671    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:11.036249    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:11.036335    4705 kubeadm.go:597] duration metric: took 4m4.586458291s to restartPrimaryControlPlane
	W0912 15:20:11.036415    4705 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0912 15:20:11.036444    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0912 15:20:12.024091    4705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 15:20:12.029382    4705 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 15:20:12.032398    4705 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 15:20:12.035340    4705 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 15:20:12.035349    4705 kubeadm.go:157] found existing configuration files:
	
	I0912 15:20:12.035373    4705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50288 /etc/kubernetes/admin.conf
	I0912 15:20:12.038002    4705 kubeadm.go:163] "https://control-plane.minikube.internal:50288" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50288 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 15:20:12.038046    4705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 15:20:12.040868    4705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50288 /etc/kubernetes/kubelet.conf
	I0912 15:20:12.044007    4705 kubeadm.go:163] "https://control-plane.minikube.internal:50288" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50288 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 15:20:12.044028    4705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 15:20:12.047152    4705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50288 /etc/kubernetes/controller-manager.conf
	I0912 15:20:12.049519    4705 kubeadm.go:163] "https://control-plane.minikube.internal:50288" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50288 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 15:20:12.049541    4705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 15:20:12.052347    4705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50288 /etc/kubernetes/scheduler.conf
	I0912 15:20:12.055408    4705 kubeadm.go:163] "https://control-plane.minikube.internal:50288" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50288 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 15:20:12.055430    4705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 15:20:12.058036    4705 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 15:20:12.074964    4705 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0912 15:20:12.074998    4705 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 15:20:12.123273    4705 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 15:20:12.123328    4705 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 15:20:12.123379    4705 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 15:20:12.174375    4705 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 15:20:12.178544    4705 out.go:235]   - Generating certificates and keys ...
	I0912 15:20:12.178662    4705 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 15:20:12.178707    4705 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 15:20:12.178743    4705 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0912 15:20:12.178778    4705 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0912 15:20:12.178887    4705 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0912 15:20:12.178918    4705 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0912 15:20:12.178949    4705 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0912 15:20:12.179020    4705 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0912 15:20:12.179057    4705 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0912 15:20:12.179135    4705 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0912 15:20:12.179159    4705 kubeadm.go:310] [certs] Using the existing "sa" key
	I0912 15:20:12.179225    4705 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 15:20:12.363496    4705 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 15:20:12.470141    4705 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 15:20:12.522050    4705 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 15:20:12.593857    4705 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 15:20:12.624257    4705 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 15:20:12.624582    4705 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 15:20:12.624667    4705 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 15:20:12.705908    4705 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 15:20:12.710095    4705 out.go:235]   - Booting up control plane ...
	I0912 15:20:12.710144    4705 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 15:20:12.710187    4705 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 15:20:12.710229    4705 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 15:20:12.710280    4705 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 15:20:12.710462    4705 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0912 15:20:17.215324    4705 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.505546 seconds
	I0912 15:20:17.215466    4705 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 15:20:17.221218    4705 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 15:20:17.730511    4705 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0912 15:20:17.730609    4705 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-871000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0912 15:20:18.239072    4705 kubeadm.go:310] [bootstrap-token] Using token: 8pmqw8.qwllb3is0gedbegr
	I0912 15:20:18.242775    4705 out.go:235]   - Configuring RBAC rules ...
	I0912 15:20:18.242871    4705 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 15:20:18.242968    4705 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 15:20:18.250058    4705 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 15:20:18.252626    4705 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 15:20:18.253849    4705 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 15:20:18.255236    4705 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 15:20:18.259844    4705 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 15:20:18.434946    4705 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0912 15:20:18.644212    4705 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0912 15:20:18.644617    4705 kubeadm.go:310] 
	I0912 15:20:18.644648    4705 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0912 15:20:18.644652    4705 kubeadm.go:310] 
	I0912 15:20:18.644696    4705 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0912 15:20:18.644704    4705 kubeadm.go:310] 
	I0912 15:20:18.644740    4705 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0912 15:20:18.644785    4705 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 15:20:18.644809    4705 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 15:20:18.644812    4705 kubeadm.go:310] 
	I0912 15:20:18.644846    4705 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0912 15:20:18.644852    4705 kubeadm.go:310] 
	I0912 15:20:18.644875    4705 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0912 15:20:18.644877    4705 kubeadm.go:310] 
	I0912 15:20:18.644912    4705 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0912 15:20:18.644958    4705 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 15:20:18.644997    4705 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 15:20:18.645004    4705 kubeadm.go:310] 
	I0912 15:20:18.645043    4705 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0912 15:20:18.645086    4705 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0912 15:20:18.645090    4705 kubeadm.go:310] 
	I0912 15:20:18.645135    4705 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8pmqw8.qwllb3is0gedbegr \
	I0912 15:20:18.645187    4705 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:927739ba46076d32ef09500def7ebaf4576e192a933c1b27a78721d37c8894ab \
	I0912 15:20:18.645202    4705 kubeadm.go:310] 	--control-plane 
	I0912 15:20:18.645204    4705 kubeadm.go:310] 
	I0912 15:20:18.645246    4705 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0912 15:20:18.645249    4705 kubeadm.go:310] 
	I0912 15:20:18.645292    4705 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8pmqw8.qwllb3is0gedbegr \
	I0912 15:20:18.645344    4705 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:927739ba46076d32ef09500def7ebaf4576e192a933c1b27a78721d37c8894ab 
	I0912 15:20:18.645407    4705 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 15:20:18.645414    4705 cni.go:84] Creating CNI manager for ""
	I0912 15:20:18.645422    4705 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:20:18.649773    4705 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 15:20:18.656804    4705 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 15:20:18.660008    4705 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0912 15:20:18.664899    4705 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 15:20:18.664974    4705 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-871000 minikube.k8s.io/updated_at=2024_09_12T15_20_18_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8 minikube.k8s.io/name=running-upgrade-871000 minikube.k8s.io/primary=true
	I0912 15:20:18.665027    4705 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 15:20:18.714162    4705 ops.go:34] apiserver oom_adj: -16
	I0912 15:20:18.714188    4705 kubeadm.go:1113] duration metric: took 49.183667ms to wait for elevateKubeSystemPrivileges
	I0912 15:20:18.714200    4705 kubeadm.go:394] duration metric: took 4m12.31916275s to StartCluster
	I0912 15:20:18.714211    4705 settings.go:142] acquiring lock: {Name:mk5a46170b8bd524e48b63472100abbce9e9531f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:20:18.714304    4705 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:20:18.714670    4705 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/kubeconfig: {Name:mk048c749582c7be36b3ac030be68b87cf483b70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:20:18.714892    4705 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:20:18.714902    4705 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0912 15:20:18.714941    4705 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-871000"
	I0912 15:20:18.714954    4705 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-871000"
	W0912 15:20:18.714957    4705 addons.go:243] addon storage-provisioner should already be in state true
	I0912 15:20:18.714969    4705 host.go:66] Checking if "running-upgrade-871000" exists ...
	I0912 15:20:18.714987    4705 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-871000"
	I0912 15:20:18.714996    4705 config.go:182] Loaded profile config "running-upgrade-871000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0912 15:20:18.715005    4705 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-871000"
	I0912 15:20:18.715821    4705 kapi.go:59] client config for running-upgrade-871000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/running-upgrade-871000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/running-upgrade-871000/client.key", CAFile:"/Users/jenkins/minikube-integration/19616-1259/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1042713d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0912 15:20:18.715946    4705 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-871000"
	W0912 15:20:18.715950    4705 addons.go:243] addon default-storageclass should already be in state true
	I0912 15:20:18.715957    4705 host.go:66] Checking if "running-upgrade-871000" exists ...
	I0912 15:20:18.718825    4705 out.go:177] * Verifying Kubernetes components...
	I0912 15:20:18.719120    4705 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 15:20:18.723174    4705 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 15:20:18.723181    4705 sshutil.go:53] new ssh client: &{IP:localhost Port:50256 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/running-upgrade-871000/id_rsa Username:docker}
	I0912 15:20:18.726736    4705 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 15:20:18.730759    4705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 15:20:18.733740    4705 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 15:20:18.733746    4705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 15:20:18.733752    4705 sshutil.go:53] new ssh client: &{IP:localhost Port:50256 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/running-upgrade-871000/id_rsa Username:docker}
	I0912 15:20:18.800988    4705 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 15:20:18.807096    4705 api_server.go:52] waiting for apiserver process to appear ...
	I0912 15:20:18.807144    4705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 15:20:18.810942    4705 api_server.go:72] duration metric: took 96.04175ms to wait for apiserver process to appear ...
	I0912 15:20:18.810950    4705 api_server.go:88] waiting for apiserver healthz status ...
	I0912 15:20:18.810957    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:18.822678    4705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 15:20:18.837942    4705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 15:20:19.163218    4705 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0912 15:20:19.163233    4705 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0912 15:20:23.812170    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:23.812218    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:28.812451    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:28.812475    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:33.812614    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:33.812655    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:38.812844    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:38.812885    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:43.813153    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:43.813186    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:48.813581    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:48.813632    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0912 15:20:49.164730    4705 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0912 15:20:49.172879    4705 out.go:177] * Enabled addons: storage-provisioner
	I0912 15:20:49.181884    4705 addons.go:510] duration metric: took 30.467842208s for enable addons: enabled=[storage-provisioner]
	I0912 15:20:53.814232    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:53.814271    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:58.814850    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:58.814870    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:21:03.815739    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:21:03.815781    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:21:08.821616    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:21:08.821665    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:21:13.829175    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:21:13.829206    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:21:18.835433    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:21:18.835563    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:21:18.853254    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:21:18.853332    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:21:18.871160    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:21:18.871231    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:21:18.883838    4705 logs.go:276] 2 containers: [457bfc5c142a f97fcc59df96]
	I0912 15:21:18.883908    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:21:18.895587    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:21:18.895658    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:21:18.906291    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:21:18.906355    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:21:18.916970    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:21:18.917040    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:21:18.927619    4705 logs.go:276] 0 containers: []
	W0912 15:21:18.927629    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:21:18.927686    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:21:18.938765    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:21:18.938779    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:21:18.938784    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:21:18.953915    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:21:18.953927    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:21:18.966174    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:21:18.966185    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:21:18.980967    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:21:18.980981    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:21:19.003894    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:21:19.003902    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:21:19.036128    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:21:19.036136    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:21:19.040894    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:21:19.040901    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:21:19.076075    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:21:19.076086    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:21:19.090181    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:21:19.090193    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:21:19.102169    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:21:19.102182    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:21:19.113962    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:21:19.113971    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:21:19.131734    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:21:19.131745    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:21:19.143693    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:21:19.143707    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:21:21.659145    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:21:26.663884    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:21:26.663990    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:21:26.675403    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:21:26.675482    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:21:26.686927    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:21:26.687006    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:21:26.697387    4705 logs.go:276] 2 containers: [457bfc5c142a f97fcc59df96]
	I0912 15:21:26.697459    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:21:26.708362    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:21:26.708432    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:21:26.719062    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:21:26.719129    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:21:26.730349    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:21:26.730421    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:21:26.745040    4705 logs.go:276] 0 containers: []
	W0912 15:21:26.745052    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:21:26.745116    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:21:26.755880    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:21:26.755894    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:21:26.755899    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:21:26.789925    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:21:26.789934    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:21:26.794518    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:21:26.794526    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:21:26.829500    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:21:26.829511    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:21:26.845113    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:21:26.845124    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:21:26.866803    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:21:26.866813    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:21:26.891650    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:21:26.891664    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:21:26.906451    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:21:26.906465    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:21:26.920411    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:21:26.920422    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:21:26.936376    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:21:26.936389    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:21:26.948314    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:21:26.948325    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:21:26.960453    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:21:26.960466    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:21:26.971775    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:21:26.971784    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:21:29.488669    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:21:34.492987    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:21:34.493834    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:21:34.528882    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:21:34.529011    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:21:34.547537    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:21:34.547624    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:21:34.560173    4705 logs.go:276] 2 containers: [457bfc5c142a f97fcc59df96]
	I0912 15:21:34.560247    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:21:34.570929    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:21:34.570992    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:21:34.581131    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:21:34.581205    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:21:34.591855    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:21:34.591933    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:21:34.601895    4705 logs.go:276] 0 containers: []
	W0912 15:21:34.601910    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:21:34.601969    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:21:34.612196    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:21:34.612210    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:21:34.612215    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:21:34.635230    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:21:34.635238    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:21:34.646337    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:21:34.646347    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:21:34.651273    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:21:34.651280    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:21:34.686262    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:21:34.686276    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:21:34.698227    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:21:34.698239    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:21:34.714004    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:21:34.714018    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:21:34.731831    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:21:34.731841    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:21:34.743079    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:21:34.743092    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:21:34.778177    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:21:34.778186    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:21:34.792492    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:21:34.792504    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:21:34.806370    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:21:34.806380    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:21:34.821367    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:21:34.821377    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:21:37.333354    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:21:42.334556    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:21:42.334720    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:21:42.349120    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:21:42.349199    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:21:42.361094    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:21:42.361165    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:21:42.372340    4705 logs.go:276] 2 containers: [457bfc5c142a f97fcc59df96]
	I0912 15:21:42.372409    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:21:42.382487    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:21:42.382556    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:21:42.392359    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:21:42.392442    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:21:42.402914    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:21:42.402988    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:21:42.413025    4705 logs.go:276] 0 containers: []
	W0912 15:21:42.413035    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:21:42.413091    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:21:42.423141    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:21:42.423155    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:21:42.423160    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:21:42.443129    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:21:42.443140    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:21:42.475887    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:21:42.475895    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:21:42.480509    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:21:42.480515    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:21:42.491945    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:21:42.491954    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:21:42.503545    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:21:42.503556    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:21:42.519049    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:21:42.519063    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:21:42.531084    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:21:42.531098    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:21:42.566159    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:21:42.566171    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:21:42.580487    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:21:42.580496    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:21:42.595229    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:21:42.595242    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:21:42.611601    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:21:42.611612    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:21:42.638051    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:21:42.638063    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:21:45.151732    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:21:50.154122    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:21:50.154339    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:21:50.172254    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:21:50.172340    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:21:50.186212    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:21:50.186283    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:21:50.197747    4705 logs.go:276] 2 containers: [457bfc5c142a f97fcc59df96]
	I0912 15:21:50.197812    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:21:50.208398    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:21:50.208459    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:21:50.218924    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:21:50.218988    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:21:50.229045    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:21:50.229105    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:21:50.238964    4705 logs.go:276] 0 containers: []
	W0912 15:21:50.238976    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:21:50.239037    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:21:50.249021    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:21:50.249038    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:21:50.249043    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:21:50.263033    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:21:50.263046    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:21:50.274737    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:21:50.274750    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:21:50.286080    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:21:50.286091    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:21:50.309574    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:21:50.309582    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:21:50.320634    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:21:50.320645    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:21:50.325143    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:21:50.325153    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:21:50.359483    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:21:50.359495    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:21:50.373774    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:21:50.373783    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:21:50.385342    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:21:50.385353    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:21:50.400785    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:21:50.400796    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:21:50.424479    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:21:50.424489    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:21:50.436257    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:21:50.436272    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
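Note: once the container IDs are known, the "Gathering logs for ..." pass above pulls each source in turn: the last 400 lines per container via docker logs, the kubelet and docker/cri-docker units via journalctl, and warnings-and-above from dmesg. A sketch of that loop, with the commands copied verbatim from the log lines above and the loop structure itself an assumption:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gather runs one log-collection command through bash, as ssh_runner.go
    // shows minikube doing, and prints whatever comes back.
    func gather(name, command string) {
    	fmt.Printf("Gathering logs for %s ...\n", name)
    	out, _ := exec.Command("/bin/bash", "-c", command).CombinedOutput()
    	fmt.Printf("%s", out)
    }

    func main() {
    	gather("kubelet", "sudo journalctl -u kubelet -n 400")
    	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
    	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    	// Per-container logs, capped at 400 lines; IDs taken from the scan above.
    	for _, id := range []string{"9944c51580b6", "8c3cf9322468"} {
    		gather("container "+id, "docker logs --tail 400 "+id)
    	}
    }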
	I0912 15:21:52.971424    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:21:57.974011    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:21:57.974175    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:21:57.992454    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:21:57.992548    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:21:58.005259    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:21:58.005328    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:21:58.021001    4705 logs.go:276] 2 containers: [457bfc5c142a f97fcc59df96]
	I0912 15:21:58.021072    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:21:58.032242    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:21:58.032308    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:21:58.043113    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:21:58.043187    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:21:58.053975    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:21:58.054048    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:21:58.064120    4705 logs.go:276] 0 containers: []
	W0912 15:21:58.064130    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:21:58.064184    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:21:58.074177    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:21:58.074191    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:21:58.074197    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:21:58.085966    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:21:58.085977    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:21:58.109206    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:21:58.109217    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:21:58.120538    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:21:58.120548    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:21:58.156844    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:21:58.156855    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:21:58.171137    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:21:58.171147    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:21:58.191316    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:21:58.191327    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:21:58.203259    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:21:58.203268    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:21:58.220192    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:21:58.220202    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:21:58.253040    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:21:58.253047    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:21:58.257695    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:21:58.257704    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:21:58.276291    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:21:58.276301    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:21:58.295520    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:21:58.295533    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
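Note: "Gathering logs for describe nodes" above shells out to the kubectl binary minikube stages inside the VM, pinned to the cluster's Kubernetes version (v1.24.1 here) and pointed at the in-VM kubeconfig, so node state can be dumped even when the host's kubectl or kubeconfig is unusable. A sketch of that single invocation — paths copied from the log; the error handling is an assumption:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Mirrors: sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes
    	//          --kubeconfig=/var/lib/minikube/kubeconfig
    	cmd := exec.Command("sudo",
    		"/var/lib/minikube/binaries/v1.24.1/kubectl",
    		"describe", "nodes",
    		"--kubeconfig=/var/lib/minikube/kubeconfig")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		fmt.Println("describe nodes failed:", err)
    	}
    	fmt.Printf("%s", out)
    }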
	I0912 15:22:00.809201    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:22:05.811489    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:22:05.811733    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:22:05.830282    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:22:05.830373    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:22:05.843749    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:22:05.843830    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:22:05.855463    4705 logs.go:276] 2 containers: [457bfc5c142a f97fcc59df96]
	I0912 15:22:05.855533    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:22:05.866248    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:22:05.866319    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:22:05.876913    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:22:05.876990    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:22:05.888055    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:22:05.888148    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:22:05.898754    4705 logs.go:276] 0 containers: []
	W0912 15:22:05.898765    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:22:05.898824    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:22:05.909930    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:22:05.909946    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:22:05.909952    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:22:05.945537    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:22:05.945550    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:22:05.950207    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:22:05.950213    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:22:05.994675    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:22:05.994686    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:22:06.010658    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:22:06.010669    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:22:06.024949    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:22:06.024960    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:22:06.041983    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:22:06.041996    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:22:06.064956    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:22:06.064964    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:22:06.075780    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:22:06.075790    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:22:06.095525    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:22:06.095534    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:22:06.106863    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:22:06.106873    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:22:06.139701    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:22:06.139711    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:22:06.151172    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:22:06.151183    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:22:08.665249    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:22:13.667715    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:22:13.668059    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:22:13.715090    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:22:13.715191    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:22:13.732999    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:22:13.733073    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:22:13.746750    4705 logs.go:276] 2 containers: [457bfc5c142a f97fcc59df96]
	I0912 15:22:13.746817    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:22:13.758591    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:22:13.758664    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:22:13.771746    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:22:13.771811    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:22:13.783302    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:22:13.783370    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:22:13.797198    4705 logs.go:276] 0 containers: []
	W0912 15:22:13.797209    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:22:13.797266    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:22:13.807548    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:22:13.807563    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:22:13.807570    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:22:13.841939    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:22:13.841947    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:22:13.876442    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:22:13.876454    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:22:13.891928    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:22:13.891941    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:22:13.917111    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:22:13.917123    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:22:13.929472    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:22:13.929485    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:22:13.941754    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:22:13.941764    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:22:13.946318    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:22:13.946326    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:22:13.960479    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:22:13.960489    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:22:13.974885    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:22:13.974894    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:22:13.986990    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:22:13.987002    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:22:13.998233    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:22:13.998247    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:22:14.010212    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:22:14.010223    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
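Note: the "container status" command recurring above is a shell-level fallback chain — use crictl when it resolves on PATH (the `which crictl || echo crictl` trick), otherwise fall back to `sudo docker ps -a`. The same preference expressed in Go, as a sketch rather than minikube's implementation:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Prefer crictl if installed, mirroring `which crictl || echo crictl`.
    	if path, err := exec.LookPath("crictl"); err == nil {
    		out, _ := exec.Command("sudo", path, "ps", "-a").CombinedOutput()
    		fmt.Printf("%s", out)
    		return
    	}
    	// Otherwise take the `|| sudo docker ps -a` branch.
    	out, _ := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    	fmt.Printf("%s", out)
    }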
	I0912 15:22:16.529958    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:22:21.532138    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:22:21.532306    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:22:21.547684    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:22:21.547760    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:22:21.560236    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:22:21.560306    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:22:21.571125    4705 logs.go:276] 2 containers: [457bfc5c142a f97fcc59df96]
	I0912 15:22:21.571189    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:22:21.581472    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:22:21.581532    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:22:21.594927    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:22:21.594990    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:22:21.606063    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:22:21.606122    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:22:21.616848    4705 logs.go:276] 0 containers: []
	W0912 15:22:21.616858    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:22:21.616916    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:22:21.628124    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:22:21.628140    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:22:21.628146    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:22:21.648628    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:22:21.648642    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:22:21.661384    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:22:21.661395    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:22:21.679177    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:22:21.679191    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:22:21.691326    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:22:21.691339    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:22:21.715887    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:22:21.715897    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:22:21.727322    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:22:21.727335    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:22:21.739076    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:22:21.739088    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:22:21.771904    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:22:21.771914    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:22:21.776157    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:22:21.776167    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:22:21.811558    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:22:21.811570    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:22:21.825463    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:22:21.825476    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:22:21.843130    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:22:21.843142    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:22:24.356462    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:22:29.358789    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:22:29.359055    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:22:29.382559    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:22:29.382664    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:22:29.398672    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:22:29.398750    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:22:29.415428    4705 logs.go:276] 2 containers: [457bfc5c142a f97fcc59df96]
	I0912 15:22:29.415508    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:22:29.425947    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:22:29.426019    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:22:29.437216    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:22:29.437280    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:22:29.448241    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:22:29.448302    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:22:29.459110    4705 logs.go:276] 0 containers: []
	W0912 15:22:29.459121    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:22:29.459186    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:22:29.473322    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:22:29.473337    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:22:29.473342    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:22:29.485181    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:22:29.485194    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:22:29.499724    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:22:29.499735    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:22:29.511049    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:22:29.511062    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:22:29.522635    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:22:29.522648    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:22:29.546981    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:22:29.546988    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:22:29.558551    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:22:29.558561    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:22:29.592739    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:22:29.592747    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:22:29.609251    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:22:29.609261    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:22:29.628983    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:22:29.628993    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:22:29.641027    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:22:29.641039    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:22:29.658547    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:22:29.658556    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:22:29.663004    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:22:29.663014    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:22:32.198656    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:22:37.200896    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:22:37.201159    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:22:37.227827    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:22:37.227944    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:22:37.244190    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:22:37.244272    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:22:37.257360    4705 logs.go:276] 4 containers: [a24810cf4146 4a385dff204c 457bfc5c142a f97fcc59df96]
	I0912 15:22:37.257442    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:22:37.269103    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:22:37.269171    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:22:37.279890    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:22:37.279960    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:22:37.290504    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:22:37.290568    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:22:37.301094    4705 logs.go:276] 0 containers: []
	W0912 15:22:37.301110    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:22:37.301168    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:22:37.311292    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:22:37.311312    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:22:37.311317    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:22:37.347043    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:22:37.347053    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:22:37.363088    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:22:37.363105    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:22:37.374845    4705 logs.go:123] Gathering logs for coredns [4a385dff204c] ...
	I0912 15:22:37.374860    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a385dff204c"
	I0912 15:22:37.386130    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:22:37.386139    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:22:37.397932    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:22:37.397942    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:22:37.412642    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:22:37.412659    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:22:37.424481    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:22:37.424492    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:22:37.449455    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:22:37.449465    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:22:37.453718    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:22:37.453725    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:22:37.472373    4705 logs.go:123] Gathering logs for coredns [a24810cf4146] ...
	I0912 15:22:37.472384    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24810cf4146"
	I0912 15:22:37.483974    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:22:37.483986    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:22:37.495715    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:22:37.495725    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:22:37.528528    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:22:37.528537    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:22:37.540751    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:22:37.540761    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:22:40.060382    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:22:45.062553    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:22:45.062759    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:22:45.081272    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:22:45.081369    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:22:45.095633    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:22:45.095705    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:22:45.107578    4705 logs.go:276] 4 containers: [a24810cf4146 4a385dff204c 457bfc5c142a f97fcc59df96]
	I0912 15:22:45.107654    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:22:45.122090    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:22:45.122159    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:22:45.133022    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:22:45.133093    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:22:45.143488    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:22:45.143553    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:22:45.153562    4705 logs.go:276] 0 containers: []
	W0912 15:22:45.153576    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:22:45.153636    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:22:45.164064    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:22:45.164081    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:22:45.164087    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:22:45.199478    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:22:45.199490    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:22:45.237029    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:22:45.237040    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:22:45.254714    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:22:45.254728    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:22:45.259976    4705 logs.go:123] Gathering logs for coredns [a24810cf4146] ...
	I0912 15:22:45.259984    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24810cf4146"
	I0912 15:22:45.271031    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:22:45.271041    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:22:45.289359    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:22:45.289371    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:22:45.301043    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:22:45.301054    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:22:45.325375    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:22:45.325385    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:22:45.337003    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:22:45.337014    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:22:45.348629    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:22:45.348640    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:22:45.361215    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:22:45.361225    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:22:45.380574    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:22:45.380584    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:22:45.394558    4705 logs.go:123] Gathering logs for coredns [4a385dff204c] ...
	I0912 15:22:45.394567    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a385dff204c"
	I0912 15:22:45.416009    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:22:45.416018    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:22:47.931914    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:22:52.934229    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:22:52.934414    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:22:52.948678    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:22:52.948758    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:22:52.960363    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:22:52.960434    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:22:52.971234    4705 logs.go:276] 4 containers: [a24810cf4146 4a385dff204c 457bfc5c142a f97fcc59df96]
	I0912 15:22:52.971302    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:22:52.981521    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:22:52.981582    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:22:52.991993    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:22:52.992055    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:22:53.002554    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:22:53.002617    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:22:53.013076    4705 logs.go:276] 0 containers: []
	W0912 15:22:53.013086    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:22:53.013138    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:22:53.023761    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:22:53.023779    4705 logs.go:123] Gathering logs for coredns [a24810cf4146] ...
	I0912 15:22:53.023784    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24810cf4146"
	I0912 15:22:53.035872    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:22:53.035883    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:22:53.047327    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:22:53.047337    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:22:53.072350    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:22:53.072358    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:22:53.106229    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:22:53.106238    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:22:53.142337    4705 logs.go:123] Gathering logs for coredns [4a385dff204c] ...
	I0912 15:22:53.142348    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a385dff204c"
	I0912 15:22:53.153947    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:22:53.153957    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:22:53.158695    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:22:53.158701    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:22:53.172989    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:22:53.173000    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:22:53.187168    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:22:53.187178    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:22:53.199082    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:22:53.199093    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:22:53.213775    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:22:53.213788    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:22:53.225881    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:22:53.225894    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:22:53.238136    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:22:53.238147    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:22:53.249660    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:22:53.249669    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:22:55.769479    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:23:00.771754    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:23:00.771959    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:23:00.786038    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:23:00.786119    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:23:00.798770    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:23:00.798838    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:23:00.809782    4705 logs.go:276] 4 containers: [a24810cf4146 4a385dff204c 457bfc5c142a f97fcc59df96]
	I0912 15:23:00.809852    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:23:00.820298    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:23:00.820360    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:23:00.830987    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:23:00.831056    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:23:00.841553    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:23:00.841616    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:23:00.852580    4705 logs.go:276] 0 containers: []
	W0912 15:23:00.852590    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:23:00.852645    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:23:00.863287    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:23:00.863304    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:23:00.863310    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:23:00.895681    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:23:00.895692    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:23:00.910266    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:23:00.910276    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:23:00.923945    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:23:00.923956    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:23:00.948649    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:23:00.948656    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:23:00.959957    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:23:00.959968    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:23:00.964495    4705 logs.go:123] Gathering logs for coredns [4a385dff204c] ...
	I0912 15:23:00.964502    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a385dff204c"
	I0912 15:23:00.976185    4705 logs.go:123] Gathering logs for coredns [a24810cf4146] ...
	I0912 15:23:00.976194    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24810cf4146"
	I0912 15:23:00.987848    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:23:00.987879    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:23:01.005379    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:23:01.005391    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:23:01.019531    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:23:01.019545    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:23:01.053306    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:23:01.053318    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:23:01.065307    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:23:01.065320    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:23:01.076982    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:23:01.076995    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:23:01.094071    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:23:01.094083    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:23:03.611759    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:23:08.614046    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:23:08.614202    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:23:08.631960    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:23:08.632046    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:23:08.645354    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:23:08.645423    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:23:08.657459    4705 logs.go:276] 4 containers: [a24810cf4146 4a385dff204c 457bfc5c142a f97fcc59df96]
	I0912 15:23:08.657526    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:23:08.667896    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:23:08.667960    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:23:08.678249    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:23:08.678312    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:23:08.688885    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:23:08.688949    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:23:08.702725    4705 logs.go:276] 0 containers: []
	W0912 15:23:08.702737    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:23:08.702792    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:23:08.713370    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:23:08.713387    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:23:08.713393    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:23:08.727302    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:23:08.727316    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:23:08.739091    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:23:08.739104    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:23:08.757006    4705 logs.go:123] Gathering logs for coredns [a24810cf4146] ...
	I0912 15:23:08.757017    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24810cf4146"
	I0912 15:23:08.769049    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:23:08.769063    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:23:08.780700    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:23:08.780711    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:23:08.786602    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:23:08.786613    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:23:08.821706    4705 logs.go:123] Gathering logs for coredns [4a385dff204c] ...
	I0912 15:23:08.821720    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a385dff204c"
	I0912 15:23:08.833395    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:23:08.833406    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:23:08.845493    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:23:08.845508    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:23:08.870633    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:23:08.870643    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:23:08.903881    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:23:08.903888    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:23:08.916178    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:23:08.916187    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:23:08.932335    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:23:08.932348    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:23:08.949464    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:23:08.949474    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:23:11.463277    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:23:16.465508    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:23:16.465683    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:23:16.486364    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:23:16.486433    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:23:16.507352    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:23:16.507436    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:23:16.522254    4705 logs.go:276] 4 containers: [a24810cf4146 4a385dff204c 457bfc5c142a f97fcc59df96]
	I0912 15:23:16.522323    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:23:16.539601    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:23:16.539673    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:23:16.550380    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:23:16.550444    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:23:16.560868    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:23:16.560934    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:23:16.570973    4705 logs.go:276] 0 containers: []
	W0912 15:23:16.570984    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:23:16.571042    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:23:16.581846    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:23:16.581864    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:23:16.581869    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:23:16.604091    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:23:16.604104    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:23:16.615816    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:23:16.615829    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:23:16.620239    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:23:16.620250    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:23:16.634071    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:23:16.634084    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:23:16.645370    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:23:16.645383    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:23:16.670619    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:23:16.670630    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:23:16.705551    4705 logs.go:123] Gathering logs for coredns [a24810cf4146] ...
	I0912 15:23:16.705561    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24810cf4146"
	I0912 15:23:16.718011    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:23:16.718021    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:23:16.729301    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:23:16.729313    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:23:16.741137    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:23:16.741147    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:23:16.775802    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:23:16.775814    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:23:16.792016    4705 logs.go:123] Gathering logs for coredns [4a385dff204c] ...
	I0912 15:23:16.792028    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a385dff204c"
	I0912 15:23:16.804622    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:23:16.804638    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:23:16.816412    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:23:16.816421    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:23:19.334425    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:23:24.335068    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:23:24.335458    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:23:24.375819    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:23:24.375931    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:23:24.392080    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:23:24.392158    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:23:24.405995    4705 logs.go:276] 4 containers: [a24810cf4146 4a385dff204c 457bfc5c142a f97fcc59df96]
	I0912 15:23:24.406070    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:23:24.417680    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:23:24.417745    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:23:24.433601    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:23:24.433669    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:23:24.444277    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:23:24.444338    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:23:24.455023    4705 logs.go:276] 0 containers: []
	W0912 15:23:24.455037    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:23:24.455092    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:23:24.465314    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:23:24.465333    4705 logs.go:123] Gathering logs for coredns [4a385dff204c] ...
	I0912 15:23:24.465338    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a385dff204c"
	I0912 15:23:24.477991    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:23:24.478001    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:23:24.489965    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:23:24.489981    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:23:24.525913    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:23:24.525929    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:23:24.530378    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:23:24.530386    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:23:24.550675    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:23:24.550685    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:23:24.565368    4705 logs.go:123] Gathering logs for coredns [a24810cf4146] ...
	I0912 15:23:24.565378    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24810cf4146"
	I0912 15:23:24.577628    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:23:24.577639    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:23:24.589374    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:23:24.589386    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:23:24.625127    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:23:24.625140    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:23:24.644427    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:23:24.644440    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:23:24.656488    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:23:24.656500    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:23:24.681640    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:23:24.681648    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:23:24.693838    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:23:24.693848    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:23:24.711649    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:23:24.711665    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:23:27.224687    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:23:32.226964    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:23:32.227132    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:23:32.239407    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:23:32.239487    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:23:32.249980    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:23:32.250050    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:23:32.261843    4705 logs.go:276] 4 containers: [a24810cf4146 4a385dff204c 457bfc5c142a f97fcc59df96]
	I0912 15:23:32.261918    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:23:32.272897    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:23:32.272960    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:23:32.283547    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:23:32.283617    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:23:32.294068    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:23:32.294137    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:23:32.315689    4705 logs.go:276] 0 containers: []
	W0912 15:23:32.315702    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:23:32.315756    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:23:32.326523    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:23:32.326541    4705 logs.go:123] Gathering logs for coredns [4a385dff204c] ...
	I0912 15:23:32.326547    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a385dff204c"
	I0912 15:23:32.337855    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:23:32.337865    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:23:32.349638    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:23:32.349649    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:23:32.388860    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:23:32.388871    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:23:32.406693    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:23:32.406703    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:23:32.419123    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:23:32.419133    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:23:32.434081    4705 logs.go:123] Gathering logs for coredns [a24810cf4146] ...
	I0912 15:23:32.434095    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24810cf4146"
	I0912 15:23:32.446515    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:23:32.446526    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:23:32.467031    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:23:32.467042    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:23:32.478940    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:23:32.478950    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:23:32.503457    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:23:32.503465    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:23:32.507546    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:23:32.507555    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:23:32.519390    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:23:32.519399    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:23:32.539988    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:23:32.540000    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:23:32.572918    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:23:32.572931    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:23:35.091569    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:23:40.092894    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:23:40.093124    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:23:40.109867    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:23:40.109952    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:23:40.125546    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:23:40.125628    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:23:40.137208    4705 logs.go:276] 4 containers: [a24810cf4146 4a385dff204c 457bfc5c142a f97fcc59df96]
	I0912 15:23:40.137284    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:23:40.147580    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:23:40.147651    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:23:40.158056    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:23:40.158124    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:23:40.170537    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:23:40.170605    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:23:40.180869    4705 logs.go:276] 0 containers: []
	W0912 15:23:40.180880    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:23:40.180935    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:23:40.190787    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:23:40.190803    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:23:40.190808    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:23:40.202240    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:23:40.202250    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:23:40.206887    4705 logs.go:123] Gathering logs for coredns [a24810cf4146] ...
	I0912 15:23:40.206895    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24810cf4146"
	I0912 15:23:40.218173    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:23:40.218183    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:23:40.233590    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:23:40.233603    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:23:40.245585    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:23:40.245599    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:23:40.263043    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:23:40.263052    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:23:40.297175    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:23:40.297183    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:23:40.331947    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:23:40.331959    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:23:40.357576    4705 logs.go:123] Gathering logs for coredns [4a385dff204c] ...
	I0912 15:23:40.357585    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a385dff204c"
	I0912 15:23:40.370118    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:23:40.370128    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:23:40.385481    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:23:40.385491    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:23:40.397632    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:23:40.397643    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:23:40.409556    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:23:40.409567    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:23:40.430727    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:23:40.430738    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:23:42.947779    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:23:47.949961    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:23:47.950150    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:23:47.970211    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:23:47.970290    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:23:47.984062    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:23:47.984140    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:23:47.996166    4705 logs.go:276] 4 containers: [a24810cf4146 4a385dff204c 457bfc5c142a f97fcc59df96]
	I0912 15:23:47.996243    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:23:48.006568    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:23:48.006640    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:23:48.017004    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:23:48.017065    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:23:48.027312    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:23:48.027383    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:23:48.037248    4705 logs.go:276] 0 containers: []
	W0912 15:23:48.037261    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:23:48.037313    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:23:48.047919    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:23:48.047937    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:23:48.047943    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:23:48.059207    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:23:48.059217    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:23:48.071085    4705 logs.go:123] Gathering logs for coredns [4a385dff204c] ...
	I0912 15:23:48.071096    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a385dff204c"
	I0912 15:23:48.083291    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:23:48.083303    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:23:48.099269    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:23:48.099277    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:23:48.110823    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:23:48.110836    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:23:48.134752    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:23:48.134761    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:23:48.169008    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:23:48.169019    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:23:48.189680    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:23:48.189696    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:23:48.208026    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:23:48.208039    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:23:48.219404    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:23:48.219417    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:23:48.223800    4705 logs.go:123] Gathering logs for coredns [a24810cf4146] ...
	I0912 15:23:48.223808    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24810cf4146"
	I0912 15:23:48.235271    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:23:48.235284    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:23:48.246539    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:23:48.246550    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:23:48.278844    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:23:48.278851    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:23:50.795468    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:23:55.797142    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:23:55.797292    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:23:55.812467    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:23:55.812546    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:23:55.825049    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:23:55.825124    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:23:55.838118    4705 logs.go:276] 4 containers: [a24810cf4146 4a385dff204c 457bfc5c142a f97fcc59df96]
	I0912 15:23:55.838208    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:23:55.849666    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:23:55.849737    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:23:55.861373    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:23:55.861440    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:23:55.872225    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:23:55.872290    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:23:55.904053    4705 logs.go:276] 0 containers: []
	W0912 15:23:55.904085    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:23:55.904170    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:23:55.925943    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:23:55.925962    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:23:55.925968    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:23:55.964828    4705 logs.go:123] Gathering logs for coredns [4a385dff204c] ...
	I0912 15:23:55.964842    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a385dff204c"
	I0912 15:23:55.977432    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:23:55.977444    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:23:55.993716    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:23:55.993734    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:23:56.006687    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:23:56.006704    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:23:56.011792    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:23:56.011805    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:23:56.024573    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:23:56.024587    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:23:56.038029    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:23:56.038041    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:23:56.072731    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:23:56.072747    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:23:56.091398    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:23:56.091413    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:23:56.111191    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:23:56.111205    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:23:56.137827    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:23:56.137844    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:23:56.153114    4705 logs.go:123] Gathering logs for coredns [a24810cf4146] ...
	I0912 15:23:56.153126    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24810cf4146"
	I0912 15:23:56.164928    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:23:56.164941    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:23:56.177344    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:23:56.177360    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:23:58.690706    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:24:03.692852    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:24:03.693100    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:24:03.710793    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:24:03.710879    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:24:03.724381    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:24:03.724455    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:24:03.735946    4705 logs.go:276] 4 containers: [a24810cf4146 4a385dff204c 457bfc5c142a f97fcc59df96]
	I0912 15:24:03.736022    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:24:03.746620    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:24:03.746686    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:24:03.757448    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:24:03.757511    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:24:03.768111    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:24:03.768178    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:24:03.778436    4705 logs.go:276] 0 containers: []
	W0912 15:24:03.778447    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:24:03.778499    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:24:03.789147    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:24:03.789165    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:24:03.789170    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:24:03.823679    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:24:03.823686    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:24:03.828528    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:24:03.828536    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:24:03.843004    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:24:03.843015    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:24:03.867310    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:24:03.867320    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:24:03.879099    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:24:03.879110    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:24:03.890283    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:24:03.890296    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:24:03.925534    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:24:03.925545    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:24:03.939866    4705 logs.go:123] Gathering logs for coredns [a24810cf4146] ...
	I0912 15:24:03.939876    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24810cf4146"
	I0912 15:24:03.951844    4705 logs.go:123] Gathering logs for coredns [4a385dff204c] ...
	I0912 15:24:03.951855    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a385dff204c"
	I0912 15:24:03.963915    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:24:03.963925    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:24:03.982211    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:24:03.982226    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:24:04.005699    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:24:04.005707    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:24:04.018505    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:24:04.018522    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:24:04.030470    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:24:04.030487    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:24:06.545693    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:24:11.547910    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:24:11.548113    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:24:11.570320    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:24:11.570408    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:24:11.585760    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:24:11.585833    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:24:11.598530    4705 logs.go:276] 4 containers: [a24810cf4146 4a385dff204c 457bfc5c142a f97fcc59df96]
	I0912 15:24:11.598609    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:24:11.609409    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:24:11.609474    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:24:11.619540    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:24:11.619611    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:24:11.630048    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:24:11.630117    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:24:11.639819    4705 logs.go:276] 0 containers: []
	W0912 15:24:11.639829    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:24:11.639884    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:24:11.651661    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:24:11.651680    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:24:11.651686    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:24:11.686029    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:24:11.686038    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:24:11.690738    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:24:11.690747    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:24:11.702329    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:24:11.702339    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:24:11.719605    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:24:11.719615    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:24:11.731962    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:24:11.731974    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:24:11.746961    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:24:11.746976    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:24:11.760726    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:24:11.760736    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:24:11.772748    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:24:11.772757    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:24:11.796496    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:24:11.796505    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:24:11.831295    4705 logs.go:123] Gathering logs for coredns [a24810cf4146] ...
	I0912 15:24:11.831308    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24810cf4146"
	I0912 15:24:11.842948    4705 logs.go:123] Gathering logs for coredns [4a385dff204c] ...
	I0912 15:24:11.842961    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a385dff204c"
	I0912 15:24:11.854801    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:24:11.854813    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:24:11.873680    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:24:11.873690    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:24:11.889518    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:24:11.889532    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:24:14.403373    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:24:19.405668    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:24:19.410447    4705 out.go:201] 
	W0912 15:24:19.413428    4705 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0912 15:24:19.413433    4705 out.go:270] * 
	W0912 15:24:19.413878    4705 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:24:19.425404    4705 out.go:201] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-871000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-09-12 15:24:19.519485 -0700 PDT m=+3379.971011501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-871000 -n running-upgrade-871000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-871000 -n running-upgrade-871000: exit status 2 (15.613014208s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-871000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-381000          | force-systemd-flag-381000 | jenkins | v1.34.0 | 12 Sep 24 15:14 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-236000              | force-systemd-env-236000  | jenkins | v1.34.0 | 12 Sep 24 15:14 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-236000           | force-systemd-env-236000  | jenkins | v1.34.0 | 12 Sep 24 15:14 PDT | 12 Sep 24 15:14 PDT |
	| start   | -p docker-flags-239000                | docker-flags-239000       | jenkins | v1.34.0 | 12 Sep 24 15:14 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-381000             | force-systemd-flag-381000 | jenkins | v1.34.0 | 12 Sep 24 15:14 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-381000          | force-systemd-flag-381000 | jenkins | v1.34.0 | 12 Sep 24 15:14 PDT | 12 Sep 24 15:14 PDT |
	| start   | -p cert-expiration-152000             | cert-expiration-152000    | jenkins | v1.34.0 | 12 Sep 24 15:14 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-239000 ssh               | docker-flags-239000       | jenkins | v1.34.0 | 12 Sep 24 15:14 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-239000 ssh               | docker-flags-239000       | jenkins | v1.34.0 | 12 Sep 24 15:14 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-239000                | docker-flags-239000       | jenkins | v1.34.0 | 12 Sep 24 15:14 PDT | 12 Sep 24 15:14 PDT |
	| start   | -p cert-options-450000                | cert-options-450000       | jenkins | v1.34.0 | 12 Sep 24 15:14 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-450000 ssh               | cert-options-450000       | jenkins | v1.34.0 | 12 Sep 24 15:14 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-450000 -- sudo        | cert-options-450000       | jenkins | v1.34.0 | 12 Sep 24 15:14 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-450000                | cert-options-450000       | jenkins | v1.34.0 | 12 Sep 24 15:14 PDT | 12 Sep 24 15:14 PDT |
	| start   | -p running-upgrade-871000             | minikube                  | jenkins | v1.26.0 | 12 Sep 24 15:14 PDT | 12 Sep 24 15:15 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-871000             | running-upgrade-871000    | jenkins | v1.34.0 | 12 Sep 24 15:15 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-152000             | cert-expiration-152000    | jenkins | v1.34.0 | 12 Sep 24 15:17 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-152000             | cert-expiration-152000    | jenkins | v1.34.0 | 12 Sep 24 15:17 PDT | 12 Sep 24 15:17 PDT |
	| start   | -p kubernetes-upgrade-469000          | kubernetes-upgrade-469000 | jenkins | v1.34.0 | 12 Sep 24 15:17 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-469000          | kubernetes-upgrade-469000 | jenkins | v1.34.0 | 12 Sep 24 15:17 PDT | 12 Sep 24 15:17 PDT |
	| start   | -p kubernetes-upgrade-469000          | kubernetes-upgrade-469000 | jenkins | v1.34.0 | 12 Sep 24 15:17 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-469000          | kubernetes-upgrade-469000 | jenkins | v1.34.0 | 12 Sep 24 15:18 PDT | 12 Sep 24 15:18 PDT |
	| start   | -p stopped-upgrade-841000             | minikube                  | jenkins | v1.26.0 | 12 Sep 24 15:18 PDT | 12 Sep 24 15:18 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-841000 stop           | minikube                  | jenkins | v1.26.0 | 12 Sep 24 15:18 PDT | 12 Sep 24 15:18 PDT |
	| start   | -p stopped-upgrade-841000             | stopped-upgrade-841000    | jenkins | v1.34.0 | 12 Sep 24 15:18 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 15:18:56
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 15:18:56.369893    4867 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:18:56.370075    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:18:56.370079    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:18:56.370081    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:18:56.370231    4867 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:18:56.371432    4867 out.go:352] Setting JSON to false
	I0912 15:18:56.390539    4867 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4700,"bootTime":1726174836,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:18:56.390655    4867 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:18:56.393759    4867 out.go:177] * [stopped-upgrade-841000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:18:56.399730    4867 notify.go:220] Checking for updates...
	I0912 15:18:56.399740    4867 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:18:56.403757    4867 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:18:56.406715    4867 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:18:56.413710    4867 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:18:56.416710    4867 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:18:56.419729    4867 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:18:56.423040    4867 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0912 15:18:56.426693    4867 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0912 15:18:56.429687    4867 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:18:56.433739    4867 out.go:177] * Using the qemu2 driver based on existing profile
	I0912 15:18:56.439703    4867 start.go:297] selected driver: qemu2
	I0912 15:18:56.439711    4867 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-841000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50517 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0912 15:18:56.439772    4867 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:18:56.442367    4867 cni.go:84] Creating CNI manager for ""
	I0912 15:18:56.442385    4867 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:18:56.442412    4867 start.go:340] cluster config:
	{Name:stopped-upgrade-841000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50517 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0912 15:18:56.442459    4867 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:18:56.449728    4867 out.go:177] * Starting "stopped-upgrade-841000" primary control-plane node in "stopped-upgrade-841000" cluster
	I0912 15:18:56.453710    4867 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0912 15:18:56.453726    4867 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0912 15:18:56.453738    4867 cache.go:56] Caching tarball of preloaded images
	I0912 15:18:56.453794    4867 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:18:56.453799    4867 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0912 15:18:56.453863    4867 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/config.json ...
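
preload.go resolves the tarball path for the requested Kubernetes version and runtime and downloads only when no local copy exists; here the v1.24.1/docker tarball was already cached. A sketch of that existence check, with the cache layout inferred from the path in the log:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // preloadPath builds the cache location for a given k8s version and
    // runtime, mirroring the filename seen in the log (layout assumed).
    func preloadPath(base, k8sVersion, runtime string) string {
    	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-arm64.tar.lz4", k8sVersion, runtime)
    	return filepath.Join(base, "cache", "preloaded-tarball", name)
    }

    func main() {
    	// MINIKUBE_HOME stands in for the integration test's .minikube dir.
    	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.24.1", "docker")
    	if _, err := os.Stat(p); err == nil {
    		fmt.Println("found local preload, skipping download:", p)
    	} else {
    		fmt.Println("no local preload, would download:", p)
    	}
    }
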
	I0912 15:18:56.454336    4867 start.go:360] acquireMachinesLock for stopped-upgrade-841000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:18:56.454366    4867 start.go:364] duration metric: took 25.5µs to acquireMachinesLock for "stopped-upgrade-841000"
	I0912 15:18:56.454376    4867 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:18:56.454379    4867 fix.go:54] fixHost starting: 
	I0912 15:18:56.454485    4867 fix.go:112] recreateIfNeeded on stopped-upgrade-841000: state=Stopped err=<nil>
	W0912 15:18:56.454494    4867 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 15:18:56.462666    4867 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-841000" ...
	I0912 15:18:59.887192    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:18:59.887650    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:18:59.926882    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:18:59.927016    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:18:59.955746    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:18:59.955838    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:18:59.969256    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:18:59.969330    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:18:59.981200    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:18:59.981267    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:18:59.996881    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:18:59.996941    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:19:00.007776    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:19:00.007843    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:19:00.018472    4705 logs.go:276] 0 containers: []
	W0912 15:19:00.018486    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:19:00.018549    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:19:00.028864    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:19:00.028884    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:19:00.028889    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:19:00.052032    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:19:00.052042    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:19:00.063576    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:19:00.063588    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:19:00.076948    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:19:00.076961    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:19:00.087773    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:19:00.087785    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:19:00.108365    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:19:00.108378    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:19:00.124368    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:19:00.124379    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:19:00.159651    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:19:00.159657    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:19:00.171725    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:19:00.171738    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:18:56.466712    4867 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:18:56.466776    4867 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/stopped-upgrade-841000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/stopped-upgrade-841000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/stopped-upgrade-841000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50482-:22,hostfwd=tcp::50483-:2376,hostname=stopped-upgrade-841000 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/stopped-upgrade-841000/disk.qcow2
	I0912 15:18:56.514440    4867 main.go:141] libmachine: STDOUT: 
	I0912 15:18:56.514473    4867 main.go:141] libmachine: STDERR: 
	I0912 15:18:56.514479    4867 main.go:141] libmachine: Waiting for VM to start (ssh -p 50482 docker@127.0.0.1)...
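
Once qemu-system-aarch64 is daemonized, libmachine blocks until the guest's host-forwarded SSH port (50482 here) accepts connections. A sketch of such a readiness loop, assuming a plain TCP dial suffices (the real check also completes an SSH handshake as the docker user):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForSSH polls a host-forwarded guest port until a TCP connection
    // succeeds or the deadline passes. Hypothetical helper.
    func waitForSSH(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
    }

    func main() {
    	if err := waitForSSH("127.0.0.1:50482", 3*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
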
	I0912 15:19:00.183564    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:19:00.183574    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:19:00.194684    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:19:00.194695    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:19:00.206375    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:19:00.206386    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:19:00.217223    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:19:00.217234    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:19:00.254158    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:19:00.254169    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:19:00.269306    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:19:00.269319    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:19:00.283251    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:19:00.283262    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:19:00.301968    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:19:00.301977    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
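
The pid-4705 trace above repeats one pattern: probe the apiserver's /healthz with a short client timeout and, when the probe times out, enumerate the k8s_* containers and dump each component's recent logs before retrying. A minimal Go sketch of that loop (hypothetical pollHealthz helper; the per-component log gathering is collapsed into a single docker ps, and TLS verification is skipped only because the apiserver cert is minikube-generated):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"os/exec"
    	"time"
    )

    // pollHealthz probes /healthz with a 5s client timeout and, after each
    // failed probe, gathers container diagnostics before retrying.
    func pollHealthz(url string, rounds int) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The apiserver presents a certificate minikube generated itself.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for i := 0; i < rounds; i++ {
    		resp, err := client.Get(url)
    		if err == nil {
    			code := resp.StatusCode
    			resp.Body.Close()
    			if code == http.StatusOK {
    				return nil
    			}
    			err = fmt.Errorf("status %d", code)
    		}
    		// Stand-in for the per-component "Gathering logs" passes above.
    		out, _ := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_", "--format", "{{.ID}} {{.Names}}").Output()
    		fmt.Printf("healthz probe %d failed: %v\n%s", i+1, err, out)
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("apiserver never reported healthy")
    }

    func main() {
    	_ = pollHealthz("https://10.0.2.15:8443/healthz", 3)
    }
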
	I0912 15:19:02.808477    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:19:07.809554    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:19:07.809744    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:19:07.821158    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:19:07.821237    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:19:07.835058    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:19:07.835127    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:19:07.847073    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:19:07.847149    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:19:07.858774    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:19:07.858853    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:19:07.870976    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:19:07.871051    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:19:07.883075    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:19:07.883151    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:19:07.895078    4705 logs.go:276] 0 containers: []
	W0912 15:19:07.895091    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:19:07.895151    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:19:07.907688    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:19:07.907707    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:19:07.907713    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:19:07.940642    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:19:07.940661    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:19:07.969196    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:19:07.969212    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:19:07.988899    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:19:07.988913    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:19:08.002607    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:19:08.002621    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:19:08.016942    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:19:08.016956    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:19:08.056474    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:19:08.056493    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:19:08.070471    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:19:08.070485    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:19:08.083718    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:19:08.083732    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:19:08.099027    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:19:08.099044    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:19:08.115352    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:19:08.115372    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:19:08.130192    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:19:08.130204    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:19:08.156928    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:19:08.156946    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:19:08.162415    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:19:08.162427    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:19:08.174547    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:19:08.174559    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:19:08.193191    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:19:08.193206    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:19:08.205678    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:19:08.205690    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:19:10.749236    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:19:15.751491    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:19:15.751771    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:19:15.778972    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:19:15.779080    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:19:15.796173    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:19:15.796259    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:19:15.810482    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:19:15.810556    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:19:15.821786    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:19:15.821851    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:19:15.832549    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:19:15.832616    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:19:15.847033    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:19:15.847102    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:19:15.856814    4705 logs.go:276] 0 containers: []
	W0912 15:19:15.856827    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:19:15.856887    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:19:15.867758    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:19:15.867779    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:19:15.867793    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:19:15.880044    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:19:15.880055    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:19:15.891291    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:19:15.891306    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:19:15.903229    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:19:15.903239    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:19:15.918972    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:19:15.918984    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:19:15.931905    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:19:15.931915    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:19:15.943233    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:19:15.943243    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:19:15.955758    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:19:15.955770    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:19:15.960554    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:19:15.960563    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:19:15.974643    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:19:15.974653    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:19:15.993272    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:19:15.993281    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:19:16.005557    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:19:16.005566    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:19:16.016839    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:19:16.016850    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:19:16.039742    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:19:16.039749    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:19:16.075765    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:19:16.075776    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:19:16.113126    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:19:16.113137    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:19:16.133463    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:19:16.133474    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:19:18.647686    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:19:17.248640    4867 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/config.json ...
	I0912 15:19:17.249175    4867 machine.go:93] provisionDockerMachine start ...
	I0912 15:19:17.249301    4867 main.go:141] libmachine: Using SSH client type: native
	I0912 15:19:17.249623    4867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d9bba0] 0x104d9e400 <nil>  [] 0s} localhost 50482 <nil> <nil>}
	I0912 15:19:17.249635    4867 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 15:19:17.334625    4867 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0912 15:19:17.334653    4867 buildroot.go:166] provisioning hostname "stopped-upgrade-841000"
	I0912 15:19:17.334736    4867 main.go:141] libmachine: Using SSH client type: native
	I0912 15:19:17.334948    4867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d9bba0] 0x104d9e400 <nil>  [] 0s} localhost 50482 <nil> <nil>}
	I0912 15:19:17.334958    4867 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-841000 && echo "stopped-upgrade-841000" | sudo tee /etc/hostname
	I0912 15:19:17.416467    4867 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-841000
	
	I0912 15:19:17.416550    4867 main.go:141] libmachine: Using SSH client type: native
	I0912 15:19:17.416721    4867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d9bba0] 0x104d9e400 <nil>  [] 0s} localhost 50482 <nil> <nil>}
	I0912 15:19:17.416735    4867 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-841000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-841000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-841000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 15:19:17.492257    4867 main.go:141] libmachine: SSH cmd err, output: <nil>: 
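
Every provisioning step above is one shell command executed over the machine's SSH connection. A minimal equivalent using golang.org/x/crypto/ssh (hypothetical runSSH helper; host-key checking is disabled only because the target is a local VM):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runSSH opens a session on the guest (localhost:50482 above) and runs a
    // single command, the way each provisioning step in the log does.
    func runSSH(addr, user, cmd string, auth ssh.AuthMethod) (string, error) {
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{auth},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local VM only
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	key, _ := os.ReadFile("id_rsa") // machine key, as in the sshutil lines
    	signer, _ := ssh.ParsePrivateKey(key)
    	out, err := runSSH("127.0.0.1:50482", "docker", "hostname", ssh.PublicKeys(signer))
    	fmt.Println(out, err)
    }
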
	I0912 15:19:17.492272    4867 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19616-1259/.minikube CaCertPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19616-1259/.minikube}
	I0912 15:19:17.492283    4867 buildroot.go:174] setting up certificates
	I0912 15:19:17.492290    4867 provision.go:84] configureAuth start
	I0912 15:19:17.492300    4867 provision.go:143] copyHostCerts
	I0912 15:19:17.492398    4867 exec_runner.go:144] found /Users/jenkins/minikube-integration/19616-1259/.minikube/cert.pem, removing ...
	I0912 15:19:17.492411    4867 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19616-1259/.minikube/cert.pem
	I0912 15:19:17.492550    4867 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19616-1259/.minikube/cert.pem (1123 bytes)
	I0912 15:19:17.492804    4867 exec_runner.go:144] found /Users/jenkins/minikube-integration/19616-1259/.minikube/key.pem, removing ...
	I0912 15:19:17.492809    4867 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19616-1259/.minikube/key.pem
	I0912 15:19:17.492885    4867 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19616-1259/.minikube/key.pem (1675 bytes)
	I0912 15:19:17.493041    4867 exec_runner.go:144] found /Users/jenkins/minikube-integration/19616-1259/.minikube/ca.pem, removing ...
	I0912 15:19:17.493050    4867 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19616-1259/.minikube/ca.pem
	I0912 15:19:17.493127    4867 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19616-1259/.minikube/ca.pem (1078 bytes)
	I0912 15:19:17.493261    4867 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-841000 san=[127.0.0.1 localhost minikube stopped-upgrade-841000]
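
configureAuth issues a server certificate whose SANs cover each name the Docker daemon may be reached by: 127.0.0.1, localhost, minikube, and the profile name. A sketch of the crypto/x509 core of that step, self-signed here to stay self-contained (minikube signs with its ca.pem/ca-key.pem instead):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// SANs and org matching the provision.go line above.
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-841000"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
    		DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-841000"},
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	// Self-signed for brevity: template doubles as its own parent.
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
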
	I0912 15:19:17.615524    4867 provision.go:177] copyRemoteCerts
	I0912 15:19:17.615558    4867 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 15:19:17.615566    4867 sshutil.go:53] new ssh client: &{IP:localhost Port:50482 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/stopped-upgrade-841000/id_rsa Username:docker}
	I0912 15:19:17.651715    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0912 15:19:17.658909    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0912 15:19:17.665998    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 15:19:17.672726    4867 provision.go:87] duration metric: took 180.436208ms to configureAuth
	I0912 15:19:17.672738    4867 buildroot.go:189] setting minikube options for container-runtime
	I0912 15:19:17.672852    4867 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0912 15:19:17.672885    4867 main.go:141] libmachine: Using SSH client type: native
	I0912 15:19:17.672978    4867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d9bba0] 0x104d9e400 <nil>  [] 0s} localhost 50482 <nil> <nil>}
	I0912 15:19:17.672982    4867 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0912 15:19:17.740900    4867 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0912 15:19:17.740910    4867 buildroot.go:70] root file system type: tmpfs
	I0912 15:19:17.740974    4867 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0912 15:19:17.741026    4867 main.go:141] libmachine: Using SSH client type: native
	I0912 15:19:17.741138    4867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d9bba0] 0x104d9e400 <nil>  [] 0s} localhost 50482 <nil> <nil>}
	I0912 15:19:17.741173    4867 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0912 15:19:17.814333    4867 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0912 15:19:17.814385    4867 main.go:141] libmachine: Using SSH client type: native
	I0912 15:19:17.814498    4867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d9bba0] 0x104d9e400 <nil>  [] 0s} localhost 50482 <nil> <nil>}
	I0912 15:19:17.814506    4867 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0912 15:19:18.158657    4867 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0912 15:19:18.158671    4867 machine.go:96] duration metric: took 909.512375ms to provisionDockerMachine
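
The unit-file update above is deliberately idempotent: write docker.service.new, diff it against the installed unit, and only on a difference (or, as here, a missing installed file) swap it in and daemon-reload. The same pattern shelled out from Go (a sketch that assumes root, where the logged commands use sudo):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // updateUnit swaps in a new unit file only when it differs from the
    // installed one, then reloads systemd, mirroring the logged command.
    // diff exits nonzero both on differences and on a missing file, so a
    // first-time install takes the replacement path too.
    func updateUnit(installed, candidate string) error {
    	if exec.Command("diff", "-u", installed, candidate).Run() == nil {
    		return nil // identical: nothing to do
    	}
    	steps := [][]string{
    		{"mv", candidate, installed},
    		{"systemctl", "daemon-reload"},
    		{"systemctl", "enable", "docker"},
    		{"systemctl", "restart", "docker"},
    	}
    	for _, s := range steps {
    		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
    			return fmt.Errorf("%v: %v\n%s", s, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	fmt.Println(updateUnit("/lib/systemd/system/docker.service",
    		"/lib/systemd/system/docker.service.new"))
    }
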
	I0912 15:19:18.158677    4867 start.go:293] postStartSetup for "stopped-upgrade-841000" (driver="qemu2")
	I0912 15:19:18.158684    4867 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 15:19:18.158746    4867 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 15:19:18.158755    4867 sshutil.go:53] new ssh client: &{IP:localhost Port:50482 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/stopped-upgrade-841000/id_rsa Username:docker}
	I0912 15:19:18.195081    4867 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 15:19:18.196364    4867 info.go:137] Remote host: Buildroot 2021.02.12
	I0912 15:19:18.196376    4867 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19616-1259/.minikube/addons for local assets ...
	I0912 15:19:18.196462    4867 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19616-1259/.minikube/files for local assets ...
	I0912 15:19:18.196582    4867 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19616-1259/.minikube/files/etc/ssl/certs/17842.pem -> 17842.pem in /etc/ssl/certs
	I0912 15:19:18.196707    4867 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 15:19:18.199457    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/files/etc/ssl/certs/17842.pem --> /etc/ssl/certs/17842.pem (1708 bytes)
	I0912 15:19:18.207080    4867 start.go:296] duration metric: took 48.394542ms for postStartSetup
	I0912 15:19:18.207093    4867 fix.go:56] duration metric: took 21.753324125s for fixHost
	I0912 15:19:18.207125    4867 main.go:141] libmachine: Using SSH client type: native
	I0912 15:19:18.207226    4867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d9bba0] 0x104d9e400 <nil>  [] 0s} localhost 50482 <nil> <nil>}
	I0912 15:19:18.207231    4867 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 15:19:18.275082    4867 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726179557.991960838
	
	I0912 15:19:18.275091    4867 fix.go:216] guest clock: 1726179557.991960838
	I0912 15:19:18.275095    4867 fix.go:229] Guest: 2024-09-12 15:19:17.991960838 -0700 PDT Remote: 2024-09-12 15:19:18.207095 -0700 PDT m=+21.870066459 (delta=-215.134162ms)
	I0912 15:19:18.275107    4867 fix.go:200] guest clock delta is within tolerance: -215.134162ms
	I0912 15:19:18.275109    4867 start.go:83] releasing machines lock for "stopped-upgrade-841000", held for 21.821351125s
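
fix.go reads the guest's `date +%s.%N`, subtracts the host clock, and resyncs only when the delta leaves tolerance; the -215ms seen here passed. The same arithmetic, reproducing the logged delta (float parsing loses sub-microsecond precision, and the tolerance value is an assumption, not minikube's actual threshold):

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    // clockDelta parses the guest's "date +%s.%N" output and returns
    // guestTime - hostTime, as in the log's delta=-215.134162ms.
    func clockDelta(guestStamp string, host time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(guestStamp, 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return guest.Sub(host), nil
    }

    func main() {
    	host := time.Unix(1726179558, 207095000) // host clock at the probe, from the log
    	d, _ := clockDelta("1726179557.991960838", host)
    	const tolerance = time.Second // assumed threshold for illustration
    	fmt.Printf("delta=%v within tolerance: %v\n", d, -tolerance < d && d < tolerance)
    }
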
	I0912 15:19:18.275180    4867 ssh_runner.go:195] Run: cat /version.json
	I0912 15:19:18.275184    4867 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 15:19:18.275187    4867 sshutil.go:53] new ssh client: &{IP:localhost Port:50482 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/stopped-upgrade-841000/id_rsa Username:docker}
	I0912 15:19:18.275200    4867 sshutil.go:53] new ssh client: &{IP:localhost Port:50482 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/stopped-upgrade-841000/id_rsa Username:docker}
	W0912 15:19:18.275826    4867 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50482: connect: connection refused
	I0912 15:19:18.275852    4867 retry.go:31] will retry after 312.47196ms: dial tcp [::1]:50482: connect: connection refused
	W0912 15:19:18.647581    4867 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0912 15:19:18.647760    4867 ssh_runner.go:195] Run: systemctl --version
	I0912 15:19:18.653011    4867 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 15:19:18.657195    4867 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 15:19:18.657255    4867 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0912 15:19:18.664030    4867 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0912 15:19:18.672901    4867 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 15:19:18.672918    4867 start.go:495] detecting cgroup driver to use...
	I0912 15:19:18.673033    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 15:19:18.684867    4867 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0912 15:19:18.689179    4867 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0912 15:19:18.692907    4867 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0912 15:19:18.692949    4867 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0912 15:19:18.696601    4867 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 15:19:18.700173    4867 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0912 15:19:18.703794    4867 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 15:19:18.707444    4867 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 15:19:18.710915    4867 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0912 15:19:18.713923    4867 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0912 15:19:18.716711    4867 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0912 15:19:18.719860    4867 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 15:19:18.722763    4867 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 15:19:18.725429    4867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 15:19:18.782192    4867 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0912 15:19:18.788434    4867 start.go:495] detecting cgroup driver to use...
	I0912 15:19:18.788520    4867 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0912 15:19:18.793770    4867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 15:19:18.798662    4867 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 15:19:18.804669    4867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 15:19:18.809417    4867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 15:19:18.813582    4867 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0912 15:19:18.867472    4867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 15:19:18.873096    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 15:19:18.878906    4867 ssh_runner.go:195] Run: which cri-dockerd
	I0912 15:19:18.880138    4867 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0912 15:19:18.883168    4867 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0912 15:19:18.888147    4867 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0912 15:19:18.943435    4867 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0912 15:19:19.028292    4867 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0912 15:19:19.028360    4867 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0912 15:19:19.033413    4867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 15:19:19.115037    4867 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0912 15:19:20.231551    4867 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.116528792s)
	I0912 15:19:20.231610    4867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0912 15:19:20.239632    4867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0912 15:19:20.244663    4867 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0912 15:19:20.302776    4867 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0912 15:19:20.366617    4867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 15:19:20.431731    4867 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0912 15:19:20.438236    4867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0912 15:19:20.442671    4867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 15:19:20.510507    4867 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0912 15:19:20.548414    4867 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0912 15:19:20.548489    4867 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0912 15:19:20.551958    4867 start.go:563] Will wait 60s for crictl version
	I0912 15:19:20.552006    4867 ssh_runner.go:195] Run: which crictl
	I0912 15:19:20.553382    4867 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 15:19:20.568034    4867 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
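
start.go gives cri-dockerd up to 60s for its socket to appear before asking crictl for the runtime version, as logged above. A sketch of that socket wait (hypothetical waitForSocket helper):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls for the CRI socket the way start.go's 60s wait
    // does: succeed once the path exists and is a unix socket.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
    			return nil
    		}
    		time.Sleep(250 * time.Millisecond)
    	}
    	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
    }

    func main() {
    	fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
    }
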
	I0912 15:19:20.568100    4867 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0912 15:19:20.584420    4867 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0912 15:19:20.605251    4867 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0912 15:19:20.605320    4867 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0912 15:19:20.606632    4867 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 15:19:20.610697    4867 kubeadm.go:883] updating cluster {Name:stopped-upgrade-841000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50517 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0912 15:19:20.610740    4867 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0912 15:19:20.610781    4867 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0912 15:19:20.621353    4867 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0912 15:19:20.621361    4867 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0912 15:19:20.621405    4867 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0912 15:19:20.624285    4867 ssh_runner.go:195] Run: which lz4
	I0912 15:19:20.625636    4867 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 15:19:20.626929    4867 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 15:19:20.626937    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0912 15:19:23.649013    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:19:23.649176    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:19:23.661732    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:19:23.661815    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:19:23.672495    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:19:23.672561    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:19:23.684438    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:19:23.684505    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:19:23.695501    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:19:23.695566    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:19:23.708852    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:19:23.708923    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:19:23.721291    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:19:23.721365    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:19:23.741607    4705 logs.go:276] 0 containers: []
	W0912 15:19:23.741620    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:19:23.741680    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:19:23.752116    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:19:23.752135    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:19:23.752141    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:19:23.765889    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:19:23.765903    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:19:23.779793    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:19:23.779805    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:19:23.817845    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:19:23.817866    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:19:23.834408    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:19:23.834421    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:19:23.852560    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:19:23.852575    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:19:23.865059    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:19:23.865074    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:19:23.869395    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:19:23.869403    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:19:23.880839    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:19:23.880852    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:19:23.896251    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:19:23.896263    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:19:23.907943    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:19:23.907961    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:19:23.927507    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:19:23.927520    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:19:23.964194    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:19:23.964209    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:19:23.982737    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:19:23.982749    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:19:23.994423    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:19:23.994436    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:19:24.019517    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:19:24.019528    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:19:24.031863    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:19:24.031879    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:19:21.495884    4867 docker.go:649] duration metric: took 870.301625ms to copy over tarball
	I0912 15:19:21.495947    4867 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 15:19:22.662282    4867 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.1663525s)
	I0912 15:19:22.662298    4867 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 15:19:22.677506    4867 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0912 15:19:22.680373    4867 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0912 15:19:22.685525    4867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 15:19:22.755544    4867 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0912 15:19:24.418700    4867 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.663184542s)
	I0912 15:19:24.418801    4867 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0912 15:19:24.431784    4867 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0912 15:19:24.431792    4867 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0912 15:19:24.431797    4867 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0912 15:19:24.437382    4867 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 15:19:24.439424    4867 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0912 15:19:24.441091    4867 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0912 15:19:24.441104    4867 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 15:19:24.442609    4867 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0912 15:19:24.442919    4867 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0912 15:19:24.444182    4867 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0912 15:19:24.444233    4867 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0912 15:19:24.445778    4867 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0912 15:19:24.445870    4867 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0912 15:19:24.446884    4867 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0912 15:19:24.447204    4867 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0912 15:19:24.448032    4867 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0912 15:19:24.448167    4867 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0912 15:19:24.449064    4867 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0912 15:19:24.449719    4867 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
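
Each "daemon lookup ... No such image" line above is expected: cache_images first asks the local Docker daemon for every required image, and a miss simply routes that image to the on-disk cache instead. A sketch of the per-image routing (cache layout inferred from the paths logged below):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // source decides where an image will be loaded from: the running daemon
    // if it already has it, otherwise the local image cache.
    func source(image, cacheDir string) string {
    	if exec.Command("docker", "image", "inspect", image).Run() == nil {
    		return "daemon"
    	}
    	// e.g. registry.k8s.io/pause:3.7 -> <cache>/registry.k8s.io/pause_3.7
    	name := strings.ReplaceAll(image, ":", "_")
    	return filepath.Join(cacheDir, name)
    }

    func main() {
    	for _, img := range []string{"registry.k8s.io/pause:3.7", "gcr.io/k8s-minikube/storage-provisioner:v5"} {
    		fmt.Println(img, "=>", source(img, "/Users/jenkins/.minikube/cache/images/arm64"))
    	}
    }
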
	I0912 15:19:24.875340    4867 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0912 15:19:24.888168    4867 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0912 15:19:24.889019    4867 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0912 15:19:24.889052    4867 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0912 15:19:24.889086    4867 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0912 15:19:24.899956    4867 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	W0912 15:19:24.908776    4867 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0912 15:19:24.908913    4867 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0912 15:19:24.908978    4867 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0912 15:19:24.910092    4867 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0912 15:19:24.910112    4867 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0912 15:19:24.910142    4867 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0912 15:19:24.910166    4867 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0912 15:19:24.910300    4867 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0912 15:19:24.915343    4867 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0912 15:19:24.915362    4867 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0912 15:19:24.915408    4867 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0912 15:19:24.931487    4867 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0912 15:19:24.931503    4867 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0912 15:19:24.931551    4867 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0912 15:19:24.932102    4867 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0912 15:19:24.932424    4867 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0912 15:19:24.932435    4867 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0912 15:19:24.932464    4867 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0912 15:19:24.941386    4867 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0912 15:19:24.941416    4867 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0912 15:19:24.941431    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0912 15:19:24.941479    4867 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0912 15:19:24.952748    4867 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0912 15:19:24.954453    4867 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0912 15:19:24.954474    4867 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0912 15:19:24.954518    4867 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0912 15:19:24.958830    4867 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0912 15:19:24.958941    4867 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0912 15:19:24.960861    4867 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0912 15:19:24.960868    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0912 15:19:24.969972    4867 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0912 15:19:24.969994    4867 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0912 15:19:24.970009    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0912 15:19:24.991243    4867 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0912 15:19:25.030109    4867 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0912 15:19:25.030133    4867 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0912 15:19:25.030140    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0912 15:19:25.032719    4867 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0912 15:19:25.032737    4867 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0912 15:19:25.032789    4867 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0912 15:19:25.073946    4867 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0912 15:19:25.073979    4867 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0912 15:19:25.074093    4867 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0912 15:19:25.075465    4867 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0912 15:19:25.075477    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0912 15:19:25.287421    4867 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0912 15:19:25.287435    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0912 15:19:25.325307    4867 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0912 15:19:25.325423    4867 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 15:19:25.438303    4867 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0912 15:19:25.438333    4867 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0912 15:19:25.438360    4867 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 15:19:25.438423    4867 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 15:19:25.456195    4867 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0912 15:19:25.456308    4867 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0912 15:19:25.457868    4867 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0912 15:19:25.457880    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0912 15:19:25.486983    4867 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0912 15:19:25.486995    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0912 15:19:25.726664    4867 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0912 15:19:25.726711    4867 cache_images.go:92] duration metric: took 1.294943292s to LoadCachedImages
	W0912 15:19:25.726756    4867 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
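The block above is minikube's cached-image load loop in action: inspect each image in the Docker daemon, remove it when the stored hash does not match, `stat` the tarball path on the guest, `scp` the tarball over when the stat fails, and pipe it through `docker load`. A minimal Go sketch of that stat-then-copy-then-load pattern, assuming a plain `ssh`/`scp` setup with a hypothetical `vm` host alias (minikube's real ssh_runner keeps a persistent session instead):

```go
// Sketch: copy an image tarball to the guest (if absent) and docker-load it.
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

// runSSH is a hypothetical helper that runs one command on the VM.
func runSSH(args ...string) error {
	return exec.Command("ssh", append([]string{"vm", "--"}, args...)...).Run()
}

func loadCachedImage(localTar string) error {
	dst := filepath.Join("/var/lib/minikube/images", filepath.Base(localTar))
	// Existence check mirrors the `stat -c "%s %y"` call in the log:
	// a non-zero exit means the tarball is not on the guest yet.
	if err := runSSH("stat", "-c", "%s %y", dst); err != nil {
		if err := exec.Command("scp", localTar, "vm:"+dst).Run(); err != nil {
			return fmt.Errorf("scp %s: %w", localTar, err)
		}
	}
	// Load via a root shell, as in the log's `sudo cat ... | docker load`.
	return runSSH("/bin/bash", "-c",
		fmt.Sprintf("sudo cat %s | docker load", dst))
}

func main() {
	if err := loadCachedImage("/tmp/pause_3.7"); err != nil {
		fmt.Println("load failed:", err)
	}
}
```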
	I0912 15:19:25.726762    4867 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0912 15:19:25.726810    4867 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-841000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 15:19:25.726874    4867 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0912 15:19:25.745832    4867 cni.go:84] Creating CNI manager for ""
	I0912 15:19:25.745843    4867 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:19:25.745850    4867 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 15:19:25.745858    4867 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-841000 NodeName:stopped-upgrade-841000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 15:19:25.745928    4867 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-841000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
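The generated kubeadm config above stacks four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) in one file. A small sketch that walks such a multi-document file and prints each document's apiVersion and kind, assuming `gopkg.in/yaml.v3` is available and a local `kubeadm.yaml` copy exists:

```go
// Sketch: enumerate the documents in a multi-document kubeadm.yaml.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f) // handles the `---` separators for us
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
```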
	I0912 15:19:25.745997    4867 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0912 15:19:25.749109    4867 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 15:19:25.749137    4867 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 15:19:25.751834    4867 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0912 15:19:25.756868    4867 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 15:19:25.761440    4867 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0912 15:19:25.766601    4867 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0912 15:19:25.767882    4867 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
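The one-liner at 15:19:25.767882 updates /etc/hosts idempotently: strip any line already tagged with the control-plane name, append the fresh mapping, and copy the result back. The same filter-and-append in Go, sketched against an illustrative path so it can be run safely:

```go
// Sketch: remove stale entries for a hostname and append the new mapping.
package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Mirror `grep -v $'\t<name>$'`: drop lines that already map the name.
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/tmp/hosts", "10.0.2.15",
		"control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
```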
	I0912 15:19:25.771662    4867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 15:19:25.850339    4867 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 15:19:25.856382    4867 certs.go:68] Setting up /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000 for IP: 10.0.2.15
	I0912 15:19:25.856390    4867 certs.go:194] generating shared ca certs ...
	I0912 15:19:25.856401    4867 certs.go:226] acquiring lock for ca certs: {Name:mkbb0c3f29ef431420fb2bc7ce1073854ddb346b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:19:25.856592    4867 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/ca.key
	I0912 15:19:25.856645    4867 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/proxy-client-ca.key
	I0912 15:19:25.856651    4867 certs.go:256] generating profile certs ...
	I0912 15:19:25.856730    4867 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/client.key
	I0912 15:19:25.856749    4867 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/apiserver.key.2b74d301
	I0912 15:19:25.856761    4867 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/apiserver.crt.2b74d301 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0912 15:19:25.972407    4867 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/apiserver.crt.2b74d301 ...
	I0912 15:19:25.972423    4867 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/apiserver.crt.2b74d301: {Name:mk752d4681e4ba2454c43b9bc2aa12efe28c4a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:19:25.973118    4867 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/apiserver.key.2b74d301 ...
	I0912 15:19:25.973128    4867 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/apiserver.key.2b74d301: {Name:mk745635a7fb23d1c496549bf805c1c2cc9798a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:19:25.973288    4867 certs.go:381] copying /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/apiserver.crt.2b74d301 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/apiserver.crt
	I0912 15:19:25.973431    4867 certs.go:385] copying /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/apiserver.key.2b74d301 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/apiserver.key
	I0912 15:19:25.973593    4867 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/proxy-client.key
	I0912 15:19:25.973729    4867 certs.go:484] found cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/1784.pem (1338 bytes)
	W0912 15:19:25.973763    4867 certs.go:480] ignoring /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/1784_empty.pem, impossibly tiny 0 bytes
	I0912 15:19:25.973769    4867 certs.go:484] found cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 15:19:25.973793    4867 certs.go:484] found cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem (1078 bytes)
	I0912 15:19:25.973814    4867 certs.go:484] found cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem (1123 bytes)
	I0912 15:19:25.973831    4867 certs.go:484] found cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/key.pem (1675 bytes)
	I0912 15:19:25.973871    4867 certs.go:484] found cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/files/etc/ssl/certs/17842.pem (1708 bytes)
	I0912 15:19:25.974191    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 15:19:25.981196    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 15:19:25.987798    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 15:19:25.995010    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 15:19:26.002600    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0912 15:19:26.009337    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 15:19:26.015927    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 15:19:26.023208    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 15:19:26.030632    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/files/etc/ssl/certs/17842.pem --> /usr/share/ca-certificates/17842.pem (1708 bytes)
	I0912 15:19:26.037445    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 15:19:26.043960    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/1784.pem --> /usr/share/ca-certificates/1784.pem (1338 bytes)
	I0912 15:19:26.051089    4867 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 15:19:26.056383    4867 ssh_runner.go:195] Run: openssl version
	I0912 15:19:26.058309    4867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17842.pem && ln -fs /usr/share/ca-certificates/17842.pem /etc/ssl/certs/17842.pem"
	I0912 15:19:26.061255    4867 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17842.pem
	I0912 15:19:26.062585    4867 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:44 /usr/share/ca-certificates/17842.pem
	I0912 15:19:26.062602    4867 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17842.pem
	I0912 15:19:26.064498    4867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17842.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 15:19:26.067842    4867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 15:19:26.071253    4867 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 15:19:26.072888    4867 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:29 /usr/share/ca-certificates/minikubeCA.pem
	I0912 15:19:26.072909    4867 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 15:19:26.074563    4867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 15:19:26.077389    4867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1784.pem && ln -fs /usr/share/ca-certificates/1784.pem /etc/ssl/certs/1784.pem"
	I0912 15:19:26.080155    4867 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1784.pem
	I0912 15:19:26.081703    4867 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:44 /usr/share/ca-certificates/1784.pem
	I0912 15:19:26.081722    4867 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1784.pem
	I0912 15:19:26.083483    4867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1784.pem /etc/ssl/certs/51391683.0"
	I0912 15:19:26.086894    4867 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 15:19:26.088518    4867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 15:19:26.090417    4867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 15:19:26.092394    4867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 15:19:26.094288    4867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 15:19:26.096146    4867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 15:19:26.098178    4867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
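The `openssl x509 -checkend 86400` runs above ask whether each certificate expires within the next 24 hours (86400 seconds); the earlier `-hash` calls compute the subject-hash names used for the /etc/ssl/certs symlinks. The expiry check has a direct Go equivalent with crypto/x509, sketched here against an illustrative certificate path:

```go
// Sketch: the Go analogue of `openssl x509 -checkend 86400` — parse a PEM
// certificate and report whether it expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Expired-soon means NotAfter falls before now+d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("apiserver.crt", 24*time.Hour) // illustrative path
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```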
	I0912 15:19:26.100010    4867 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-841000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50517 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0912 15:19:26.100074    4867 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0912 15:19:26.110919    4867 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 15:19:26.114354    4867 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0912 15:19:26.114360    4867 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0912 15:19:26.114387    4867 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 15:19:26.117888    4867 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 15:19:26.118171    4867 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-841000" does not appear in /Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:19:26.118271    4867 kubeconfig.go:62] /Users/jenkins/minikube-integration/19616-1259/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-841000" cluster setting kubeconfig missing "stopped-upgrade-841000" context setting]
	I0912 15:19:26.118451    4867 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/kubeconfig: {Name:mk048c749582c7be36b3ac030be68b87cf483b70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:19:26.118910    4867 kapi.go:59] client config for stopped-upgrade-841000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/client.key", CAFile:"/Users/jenkins/minikube-integration/19616-1259/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1063653d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0912 15:19:26.119260    4867 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 15:19:26.122607    4867 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-841000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
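Config drift detection here is plain `diff -u`: exit status 0 means the files match, 1 means they differ (the diff above is what gets logged before reconfiguring), and anything else means diff itself failed. A sketch of that exit-code triage:

```go
// Sketch: detect kubeadm config drift the way the log does — run `diff -u`
// and treat exit status 1 (files differ) as "reconfigure".
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // identical: keep the existing config
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, string(out), nil // differ: log this diff and reconfigure
	}
	return false, "", err // status 2 or worse: diff itself failed
}

func main() {
	drifted, diff, err := configDrifted("kubeadm.yaml", "kubeadm.yaml.new")
	if err != nil {
		fmt.Println(err)
		return
	}
	if drifted {
		fmt.Print(diff)
	}
}
```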
	I0912 15:19:26.122612    4867 kubeadm.go:1160] stopping kube-system containers ...
	I0912 15:19:26.122652    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0912 15:19:26.132965    4867 docker.go:483] Stopping containers: [560d61b775be d3229f85be9b ae93257a08cb bdc9dc70be85 ddfbb03a6103 0273e19b82fe 9b6f02f235a6 73bd0a6b6c8b]
	I0912 15:19:26.133033    4867 ssh_runner.go:195] Run: docker stop 560d61b775be d3229f85be9b ae93257a08cb bdc9dc70be85 ddfbb03a6103 0273e19b82fe 9b6f02f235a6 73bd0a6b6c8b
	I0912 15:19:26.143586    4867 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 15:19:26.149336    4867 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 15:19:26.152018    4867 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 15:19:26.152030    4867 kubeadm.go:157] found existing configuration files:
	
	I0912 15:19:26.152051    4867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/admin.conf
	I0912 15:19:26.154920    4867 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 15:19:26.154943    4867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 15:19:26.157697    4867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/kubelet.conf
	I0912 15:19:26.160045    4867 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 15:19:26.160076    4867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 15:19:26.162886    4867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/controller-manager.conf
	I0912 15:19:26.165395    4867 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 15:19:26.165413    4867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 15:19:26.168094    4867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/scheduler.conf
	I0912 15:19:26.171156    4867 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 15:19:26.171179    4867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 15:19:26.173939    4867 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 15:19:26.176513    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 15:19:26.197466    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 15:19:26.545977    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:19:26.757177    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 15:19:26.863945    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 15:19:26.883205    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0912 15:19:26.913969    4867 api_server.go:52] waiting for apiserver process to appear ...
	I0912 15:19:26.914035    4867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 15:19:27.416226    4867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 15:19:27.916117    4867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 15:19:27.920302    4867 api_server.go:72] duration metric: took 1.006361541s to wait for apiserver process to appear ...
	I0912 15:19:27.920311    4867 api_server.go:88] waiting for apiserver healthz status ...
	I0912 15:19:27.920320    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
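Both processes (4705 and 4867) now poll https://10.0.2.15:8443/healthz with a per-request client timeout, retrying until the apiserver answers or the overall deadline passes. A minimal polling sketch; `InsecureSkipVerify` is for illustration only, since minikube authenticates with the cluster CA from its kubeconfig:

```go
// Sketch: poll the apiserver /healthz endpoint until it answers "ok"
// or the deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthy(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-request timeout, as in the log
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", deadline)
}

func main() {
	fmt.Println(waitHealthy("https://10.0.2.15:8443/healthz", time.Minute))
}
```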
	I0912 15:19:31.548254    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:19:31.548639    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:19:31.588693    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:19:31.588825    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:19:31.614987    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:19:31.615078    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:19:31.628140    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:19:31.628216    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:19:31.639452    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:19:31.639522    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:19:31.650132    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:19:31.650203    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:19:31.660811    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:19:31.660882    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:19:31.670865    4705 logs.go:276] 0 containers: []
	W0912 15:19:31.670876    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:19:31.670937    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:19:31.681080    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:19:31.681097    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:19:31.681102    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:19:31.692541    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:19:31.692554    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:19:31.715720    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:19:31.715730    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:19:31.753063    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:19:31.753076    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:19:31.767379    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:19:31.767389    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:19:31.778975    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:19:31.778993    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:19:31.790917    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:19:31.790927    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:19:31.809609    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:19:31.809620    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:19:31.827011    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:19:31.827020    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:19:31.838812    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:19:31.838823    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:19:31.874496    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:19:31.874509    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:19:31.885759    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:19:31.885771    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:19:31.897944    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:19:31.897956    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:19:31.913352    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:19:31.913369    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:19:31.925792    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:19:31.925803    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:19:31.930532    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:19:31.930542    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:19:31.945997    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:19:31.946008    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
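Each log-gathering cycle above follows the same two-step pattern per component: list container IDs whose names match `k8s_<component>`, then tail the last 400 lines of each container's logs. A condensed sketch of that sweep (component list and output format are illustrative):

```go
// Sketch of the per-component log sweep seen in the gathering cycles.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func gather(component string) {
	// Step 1: list matching container IDs, as `docker ps -a --filter` does.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		fmt.Println(component, err)
		return
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Printf("No container was found matching %q\n", component)
		return
	}
	// Step 2: tail each container's logs.
	for _, id := range ids {
		logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		fmt.Printf("== %s [%s] ==\n%s", component, id, logs)
	}
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager"} {
		gather(c)
	}
}
```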
	I0912 15:19:34.461997    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:19:32.922390    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:19:32.922422    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:19:39.464140    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:19:39.464255    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:19:39.477451    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:19:39.477531    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:19:39.488541    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:19:39.488603    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:19:39.498766    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:19:39.498832    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:19:39.509008    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:19:39.509073    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:19:39.519675    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:19:39.519739    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:19:39.530539    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:19:39.530598    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:19:39.540905    4705 logs.go:276] 0 containers: []
	W0912 15:19:39.540918    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:19:39.540971    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:19:39.551643    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:19:39.551661    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:19:39.551666    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:19:39.563035    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:19:39.563048    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:19:39.600970    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:19:39.600979    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:19:39.613098    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:19:39.613108    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:19:39.625042    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:19:39.625053    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:19:39.643101    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:19:39.643112    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:19:39.667950    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:19:39.667958    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:19:39.701342    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:19:39.701353    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:19:39.716337    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:19:39.716348    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:19:39.728115    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:19:39.728127    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:19:39.744810    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:19:39.744822    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:19:39.749052    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:19:39.749059    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:19:39.763126    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:19:39.763136    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:19:39.775696    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:19:39.775707    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:19:39.792876    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:19:39.792887    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:19:39.806588    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:19:39.806599    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:19:39.817817    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:19:39.817828    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:19:37.923048    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:19:37.923092    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:19:42.334015    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:19:42.923537    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:19:42.923593    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:19:47.336092    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:19:47.336247    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:19:47.351190    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:19:47.351277    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:19:47.363926    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:19:47.364000    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:19:47.374549    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:19:47.374616    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:19:47.385364    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:19:47.385435    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:19:47.399804    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:19:47.399879    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:19:47.410174    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:19:47.410239    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:19:47.420459    4705 logs.go:276] 0 containers: []
	W0912 15:19:47.420470    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:19:47.420532    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:19:47.430831    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:19:47.430850    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:19:47.430856    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:19:47.442931    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:19:47.442942    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:19:47.454509    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:19:47.454518    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:19:47.471901    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:19:47.471910    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:19:47.507406    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:19:47.507416    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:19:47.520130    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:19:47.520140    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:19:47.531234    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:19:47.531247    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:19:47.545502    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:19:47.545516    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:19:47.550598    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:19:47.550610    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:19:47.562253    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:19:47.562267    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:19:47.573768    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:19:47.573779    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:19:47.598739    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:19:47.598761    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:19:47.612654    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:19:47.612668    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:19:47.624353    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:19:47.624370    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:19:47.659898    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:19:47.659907    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:19:47.677612    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:19:47.677622    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:19:47.697331    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:19:47.697342    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:19:47.924294    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:19:47.924326    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:19:50.213389    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:19:52.925066    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:19:52.925120    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:19:55.215635    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:19:55.215814    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:19:55.241974    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:19:55.242055    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:19:55.254472    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:19:55.254549    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:19:55.267594    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:19:55.267664    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:19:55.279322    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:19:55.279385    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:19:55.290794    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:19:55.290864    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:19:55.302340    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:19:55.302411    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:19:55.313731    4705 logs.go:276] 0 containers: []
	W0912 15:19:55.313743    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:19:55.313801    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:19:55.325196    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:19:55.325217    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:19:55.325223    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:19:55.340187    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:19:55.340198    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:19:55.358380    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:19:55.358393    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:19:55.373750    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:19:55.373762    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:19:55.386899    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:19:55.386912    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:19:55.398879    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:19:55.398890    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:19:55.435698    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:19:55.435710    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:19:55.447675    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:19:55.447687    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:19:55.459629    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:19:55.459643    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:19:55.499162    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:19:55.499174    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:19:55.523991    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:19:55.524001    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:19:55.541105    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:19:55.541116    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:19:55.555641    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:19:55.555653    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:19:55.577055    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:19:55.577067    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:19:55.588650    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:19:55.588662    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:19:55.600591    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:19:55.600605    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:19:55.611730    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:19:55.611742    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
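
[Editor's note] Every "Gathering logs for X" pair above follows the same two-step recipe: resolve container IDs with a docker name filter, then tail each container's last 400 log lines. A condensed local sketch of that recipe (function names are mine; minikube actually runs these commands inside the VM over SSH via ssh_runner rather than against a local docker socket):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists docker container IDs whose name matches the
    // k8s_<component> prefix, mirroring the
    // `docker ps -a --filter=name=k8s_... --format={{.ID}}` calls in the log.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
    		ids, err := containerIDs(c)
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", c)
    			continue
    		}
    		for _, id := range ids {
    			// Same tail depth the log uses for every component.
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
    		}
    	}
    }
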
	I0912 15:19:58.118263    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:19:57.926257    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:19:57.926305    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:03.120431    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:03.120736    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:20:03.159582    4705 logs.go:276] 2 containers: [9106d524a2ca c4f21347dd41]
	I0912 15:20:03.159712    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:20:03.180467    4705 logs.go:276] 2 containers: [8ee288dae597 7d3eeb6f3876]
	I0912 15:20:03.180576    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:20:03.195554    4705 logs.go:276] 1 containers: [a417b064a860]
	I0912 15:20:03.195628    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:20:03.208699    4705 logs.go:276] 2 containers: [a189dd704fb2 e8cbd7cb34df]
	I0912 15:20:03.208762    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:20:03.222818    4705 logs.go:276] 1 containers: [03963541195f]
	I0912 15:20:03.222886    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:20:03.233571    4705 logs.go:276] 2 containers: [eae94b72c6d8 7db33cec1839]
	I0912 15:20:03.233642    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:20:03.244874    4705 logs.go:276] 0 containers: []
	W0912 15:20:03.244900    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:20:03.244960    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:20:03.255496    4705 logs.go:276] 2 containers: [b62f340bc8c8 500fc77adfe6]
	I0912 15:20:03.255513    4705 logs.go:123] Gathering logs for coredns [a417b064a860] ...
	I0912 15:20:03.255518    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a417b064a860"
	I0912 15:20:03.267869    4705 logs.go:123] Gathering logs for kube-scheduler [e8cbd7cb34df] ...
	I0912 15:20:03.267879    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cbd7cb34df"
	I0912 15:20:03.280304    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:20:03.280314    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:20:03.285919    4705 logs.go:123] Gathering logs for etcd [7d3eeb6f3876] ...
	I0912 15:20:03.285929    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d3eeb6f3876"
	I0912 15:20:03.299909    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:20:03.299927    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:20:03.312200    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:20:03.312212    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:20:03.350898    4705 logs.go:123] Gathering logs for kube-apiserver [9106d524a2ca] ...
	I0912 15:20:03.350909    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9106d524a2ca"
	I0912 15:20:03.365058    4705 logs.go:123] Gathering logs for kube-proxy [03963541195f] ...
	I0912 15:20:03.365070    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03963541195f"
	I0912 15:20:03.376885    4705 logs.go:123] Gathering logs for kube-controller-manager [eae94b72c6d8] ...
	I0912 15:20:03.376900    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae94b72c6d8"
	I0912 15:20:03.394467    4705 logs.go:123] Gathering logs for kube-controller-manager [7db33cec1839] ...
	I0912 15:20:03.394477    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7db33cec1839"
	I0912 15:20:03.405274    4705 logs.go:123] Gathering logs for storage-provisioner [b62f340bc8c8] ...
	I0912 15:20:03.405286    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b62f340bc8c8"
	I0912 15:20:03.417207    4705 logs.go:123] Gathering logs for storage-provisioner [500fc77adfe6] ...
	I0912 15:20:03.417217    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500fc77adfe6"
	I0912 15:20:03.429008    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:20:03.429020    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:20:03.465523    4705 logs.go:123] Gathering logs for kube-apiserver [c4f21347dd41] ...
	I0912 15:20:03.465534    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4f21347dd41"
	I0912 15:20:03.480763    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:20:03.480773    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:20:03.503369    4705 logs.go:123] Gathering logs for etcd [8ee288dae597] ...
	I0912 15:20:03.503377    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee288dae597"
	I0912 15:20:03.517648    4705 logs.go:123] Gathering logs for kube-scheduler [a189dd704fb2] ...
	I0912 15:20:03.517663    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a189dd704fb2"
	I0912 15:20:02.927854    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:02.927906    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:06.033671    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:07.929864    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:07.929906    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:11.036249    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:11.036335    4705 kubeadm.go:597] duration metric: took 4m4.586458291s to restartPrimaryControlPlane
	W0912 15:20:11.036415    4705 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0912 15:20:11.036444    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0912 15:20:12.024091    4705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 15:20:12.029382    4705 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 15:20:12.032398    4705 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 15:20:12.035340    4705 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 15:20:12.035349    4705 kubeadm.go:157] found existing configuration files:
	
	I0912 15:20:12.035373    4705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50288 /etc/kubernetes/admin.conf
	I0912 15:20:12.038002    4705 kubeadm.go:163] "https://control-plane.minikube.internal:50288" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50288 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 15:20:12.038046    4705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 15:20:12.040868    4705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50288 /etc/kubernetes/kubelet.conf
	I0912 15:20:12.044007    4705 kubeadm.go:163] "https://control-plane.minikube.internal:50288" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50288 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 15:20:12.044028    4705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 15:20:12.047152    4705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50288 /etc/kubernetes/controller-manager.conf
	I0912 15:20:12.049519    4705 kubeadm.go:163] "https://control-plane.minikube.internal:50288" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50288 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 15:20:12.049541    4705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 15:20:12.052347    4705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50288 /etc/kubernetes/scheduler.conf
	I0912 15:20:12.055408    4705 kubeadm.go:163] "https://control-plane.minikube.internal:50288" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50288 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 15:20:12.055430    4705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
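
[Editor's note] The four grep/rm pairs above are minikube's stale-kubeconfig sweep: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint and removed otherwise, so the upcoming kubeadm init can regenerate it. In this run every file is already gone (kubeadm reset removed them), so each grep exits 2 and each rm is a no-op. A sketch of the same decision as a pure-Go file scan instead of shelling out to grep, with the endpoint and file list taken from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // sweepStaleConfigs removes any kubeconfig that does not reference the
    // expected control-plane endpoint, the same decision the grep/rm pairs
    // in the log make. A missing file also falls through to removal, which
    // is then a harmless no-op.
    func sweepStaleConfigs(endpoint string, paths []string) {
    	for _, p := range paths {
    		data, err := os.ReadFile(p)
    		if err == nil && strings.Contains(string(data), endpoint) {
    			continue // config already points at the right endpoint
    		}
    		fmt.Printf("%q may not be in %s - will remove\n", endpoint, p)
    		os.Remove(p)
    	}
    }

    func main() {
    	sweepStaleConfigs("https://control-plane.minikube.internal:50288", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }
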
	I0912 15:20:12.058036    4705 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 15:20:12.074964    4705 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0912 15:20:12.074998    4705 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 15:20:12.123273    4705 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 15:20:12.123328    4705 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 15:20:12.123379    4705 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 15:20:12.174375    4705 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 15:20:12.178544    4705 out.go:235]   - Generating certificates and keys ...
	I0912 15:20:12.178662    4705 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 15:20:12.178707    4705 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 15:20:12.178743    4705 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0912 15:20:12.178778    4705 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0912 15:20:12.178887    4705 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0912 15:20:12.178918    4705 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0912 15:20:12.178949    4705 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0912 15:20:12.179020    4705 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0912 15:20:12.179057    4705 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0912 15:20:12.179135    4705 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0912 15:20:12.179159    4705 kubeadm.go:310] [certs] Using the existing "sa" key
	I0912 15:20:12.179225    4705 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 15:20:12.363496    4705 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 15:20:12.470141    4705 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 15:20:12.522050    4705 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 15:20:12.593857    4705 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 15:20:12.624257    4705 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 15:20:12.624582    4705 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 15:20:12.624667    4705 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 15:20:12.705908    4705 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 15:20:12.710095    4705 out.go:235]   - Booting up control plane ...
	I0912 15:20:12.710144    4705 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 15:20:12.710187    4705 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 15:20:12.710229    4705 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 15:20:12.710280    4705 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 15:20:12.710462    4705 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0912 15:20:12.932032    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:12.932054    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:17.215324    4705 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.505546 seconds
	I0912 15:20:17.215466    4705 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 15:20:17.221218    4705 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 15:20:17.730511    4705 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0912 15:20:17.730609    4705 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-871000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0912 15:20:18.239072    4705 kubeadm.go:310] [bootstrap-token] Using token: 8pmqw8.qwllb3is0gedbegr
	I0912 15:20:18.242775    4705 out.go:235]   - Configuring RBAC rules ...
	I0912 15:20:18.242871    4705 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 15:20:18.242968    4705 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 15:20:18.250058    4705 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 15:20:18.252626    4705 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 15:20:18.253849    4705 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 15:20:18.255236    4705 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 15:20:18.259844    4705 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 15:20:18.434946    4705 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0912 15:20:18.644212    4705 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0912 15:20:18.644617    4705 kubeadm.go:310] 
	I0912 15:20:18.644648    4705 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0912 15:20:18.644652    4705 kubeadm.go:310] 
	I0912 15:20:18.644696    4705 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0912 15:20:18.644704    4705 kubeadm.go:310] 
	I0912 15:20:18.644740    4705 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0912 15:20:18.644785    4705 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 15:20:18.644809    4705 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 15:20:18.644812    4705 kubeadm.go:310] 
	I0912 15:20:18.644846    4705 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0912 15:20:18.644852    4705 kubeadm.go:310] 
	I0912 15:20:18.644875    4705 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0912 15:20:18.644877    4705 kubeadm.go:310] 
	I0912 15:20:18.644912    4705 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0912 15:20:18.644958    4705 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 15:20:18.644997    4705 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 15:20:18.645004    4705 kubeadm.go:310] 
	I0912 15:20:18.645043    4705 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0912 15:20:18.645086    4705 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0912 15:20:18.645090    4705 kubeadm.go:310] 
	I0912 15:20:18.645135    4705 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8pmqw8.qwllb3is0gedbegr \
	I0912 15:20:18.645187    4705 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:927739ba46076d32ef09500def7ebaf4576e192a933c1b27a78721d37c8894ab \
	I0912 15:20:18.645202    4705 kubeadm.go:310] 	--control-plane 
	I0912 15:20:18.645204    4705 kubeadm.go:310] 
	I0912 15:20:18.645246    4705 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0912 15:20:18.645249    4705 kubeadm.go:310] 
	I0912 15:20:18.645292    4705 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8pmqw8.qwllb3is0gedbegr \
	I0912 15:20:18.645344    4705 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:927739ba46076d32ef09500def7ebaf4576e192a933c1b27a78721d37c8894ab 
	I0912 15:20:18.645407    4705 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 15:20:18.645414    4705 cni.go:84] Creating CNI manager for ""
	I0912 15:20:18.645422    4705 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:20:18.649773    4705 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 15:20:18.656804    4705 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 15:20:18.660008    4705 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
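
[Editor's note] The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. For orientation only, here is a representative bridge conflist written to the same path; treat the subnet and plugin options as assumptions, not the exact file minikube ships:

    package main

    import "os"

    // A representative bridge CNI conflist. The actual 496-byte payload the
    // ssh_runner scp transfers is not shown in the log, so the cniVersion,
    // subnet, and plugin options below are assumptions for illustration.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	// Same destination path the scp in the log uses.
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }
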
	I0912 15:20:18.664899    4705 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 15:20:18.664974    4705 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-871000 minikube.k8s.io/updated_at=2024_09_12T15_20_18_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8 minikube.k8s.io/name=running-upgrade-871000 minikube.k8s.io/primary=true
	I0912 15:20:18.665027    4705 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 15:20:18.714162    4705 ops.go:34] apiserver oom_adj: -16
	I0912 15:20:18.714188    4705 kubeadm.go:1113] duration metric: took 49.183667ms to wait for elevateKubeSystemPrivileges
	I0912 15:20:18.714200    4705 kubeadm.go:394] duration metric: took 4m12.31916275s to StartCluster
	I0912 15:20:18.714211    4705 settings.go:142] acquiring lock: {Name:mk5a46170b8bd524e48b63472100abbce9e9531f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:20:18.714304    4705 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:20:18.714670    4705 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/kubeconfig: {Name:mk048c749582c7be36b3ac030be68b87cf483b70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:20:18.714892    4705 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:20:18.714902    4705 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0912 15:20:18.714941    4705 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-871000"
	I0912 15:20:18.714954    4705 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-871000"
	W0912 15:20:18.714957    4705 addons.go:243] addon storage-provisioner should already be in state true
	I0912 15:20:18.714969    4705 host.go:66] Checking if "running-upgrade-871000" exists ...
	I0912 15:20:18.714987    4705 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-871000"
	I0912 15:20:18.714996    4705 config.go:182] Loaded profile config "running-upgrade-871000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0912 15:20:18.715005    4705 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-871000"
	I0912 15:20:18.715821    4705 kapi.go:59] client config for running-upgrade-871000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/running-upgrade-871000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/running-upgrade-871000/client.key", CAFile:"/Users/jenkins/minikube-integration/19616-1259/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1042713d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0912 15:20:18.715946    4705 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-871000"
	W0912 15:20:18.715950    4705 addons.go:243] addon default-storageclass should already be in state true
	I0912 15:20:18.715957    4705 host.go:66] Checking if "running-upgrade-871000" exists ...
	I0912 15:20:18.718825    4705 out.go:177] * Verifying Kubernetes components...
	I0912 15:20:18.719120    4705 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 15:20:18.723174    4705 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 15:20:18.723181    4705 sshutil.go:53] new ssh client: &{IP:localhost Port:50256 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/running-upgrade-871000/id_rsa Username:docker}
	I0912 15:20:18.726736    4705 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 15:20:18.730759    4705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 15:20:18.733740    4705 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 15:20:18.733746    4705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 15:20:18.733752    4705 sshutil.go:53] new ssh client: &{IP:localhost Port:50256 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/running-upgrade-871000/id_rsa Username:docker}
	I0912 15:20:18.800988    4705 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 15:20:18.807096    4705 api_server.go:52] waiting for apiserver process to appear ...
	I0912 15:20:18.807144    4705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 15:20:18.810942    4705 api_server.go:72] duration metric: took 96.04175ms to wait for apiserver process to appear ...
	I0912 15:20:18.810950    4705 api_server.go:88] waiting for apiserver healthz status ...
	I0912 15:20:18.810957    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:18.822678    4705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 15:20:18.837942    4705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 15:20:19.163218    4705 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0912 15:20:19.163233    4705 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0912 15:20:17.934118    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:17.934144    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:23.812170    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:23.812218    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:22.936239    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:22.936311    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:28.812451    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:28.812475    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:27.938698    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:27.938799    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:20:27.950434    4867 logs.go:276] 2 containers: [eb0dc5acb005 bdc9dc70be85]
	I0912 15:20:27.950506    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:20:27.961302    4867 logs.go:276] 2 containers: [122be89153d2 d3229f85be9b]
	I0912 15:20:27.961368    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:20:27.971903    4867 logs.go:276] 1 containers: [7cc43947deca]
	I0912 15:20:27.971977    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:20:27.985860    4867 logs.go:276] 2 containers: [a3bda796bcce ae93257a08cb]
	I0912 15:20:27.985927    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:20:27.996661    4867 logs.go:276] 1 containers: [f11c4c968a9a]
	I0912 15:20:27.996725    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:20:28.007240    4867 logs.go:276] 2 containers: [f30094d831c4 560d61b775be]
	I0912 15:20:28.007298    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:20:28.018520    4867 logs.go:276] 0 containers: []
	W0912 15:20:28.018531    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:20:28.018588    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:20:28.029483    4867 logs.go:276] 2 containers: [83b1123d4f7f c39b454144bf]
	I0912 15:20:28.029507    4867 logs.go:123] Gathering logs for storage-provisioner [c39b454144bf] ...
	I0912 15:20:28.029514    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39b454144bf"
	I0912 15:20:28.040385    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:20:28.040396    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:20:28.052670    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:20:28.052685    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:20:28.056763    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:20:28.056770    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:20:28.131497    4867 logs.go:123] Gathering logs for kube-scheduler [a3bda796bcce] ...
	I0912 15:20:28.131509    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bda796bcce"
	I0912 15:20:28.143306    4867 logs.go:123] Gathering logs for kube-scheduler [ae93257a08cb] ...
	I0912 15:20:28.143318    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae93257a08cb"
	I0912 15:20:28.159200    4867 logs.go:123] Gathering logs for kube-controller-manager [f30094d831c4] ...
	I0912 15:20:28.159212    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30094d831c4"
	I0912 15:20:28.177651    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:20:28.177661    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:20:28.214201    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:20:28.214301    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:20:28.215657    4867 logs.go:123] Gathering logs for kube-apiserver [bdc9dc70be85] ...
	I0912 15:20:28.215665    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc9dc70be85"
	I0912 15:20:28.257052    4867 logs.go:123] Gathering logs for etcd [122be89153d2] ...
	I0912 15:20:28.257068    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 122be89153d2"
	I0912 15:20:28.272074    4867 logs.go:123] Gathering logs for coredns [7cc43947deca] ...
	I0912 15:20:28.272085    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc43947deca"
	I0912 15:20:28.283014    4867 logs.go:123] Gathering logs for kube-controller-manager [560d61b775be] ...
	I0912 15:20:28.283027    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560d61b775be"
	I0912 15:20:28.297989    4867 logs.go:123] Gathering logs for storage-provisioner [83b1123d4f7f] ...
	I0912 15:20:28.297999    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b1123d4f7f"
	I0912 15:20:28.309556    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:20:28.309568    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:20:28.335448    4867 logs.go:123] Gathering logs for kube-apiserver [eb0dc5acb005] ...
	I0912 15:20:28.335459    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0dc5acb005"
	I0912 15:20:28.350726    4867 logs.go:123] Gathering logs for kube-proxy [f11c4c968a9a] ...
	I0912 15:20:28.350740    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11c4c968a9a"
	I0912 15:20:28.361800    4867 logs.go:123] Gathering logs for etcd [d3229f85be9b] ...
	I0912 15:20:28.361811    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3229f85be9b"
	I0912 15:20:28.376990    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:20:28.377000    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:20:28.377031    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:20:28.377035    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:20:28.377038    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:20:28.377042    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:20:28.377045    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
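
[Editor's note] The "Found kubelet problem" and "X Problems detected in kubelet" lines come from a scan of the kubelet journal for suspicious entries; here it catches the coredns ConfigMap RBAC failure on stopped-upgrade-841000. A rough Go reconstruction of that scan; the keyword list is an assumption, since the exact patterns logs.go matches are not shown here, and reading the journal typically needs root:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // findKubeletProblems reproduces the spirit of the logs.go scan above:
    // read the last 400 kubelet journal lines and flag ones that look like
    // failures. The keyword set is an illustrative assumption.
    func findKubeletProblems() ([]string, error) {
    	out, err := exec.Command("journalctl", "-u", "kubelet", "-n", "400").Output()
    	if err != nil {
    		return nil, err
    	}
    	var problems []string
    	sc := bufio.NewScanner(strings.NewReader(string(out)))
    	for sc.Scan() {
    		line := sc.Text()
    		if strings.Contains(line, "forbidden") || strings.Contains(line, "Failed to watch") {
    			problems = append(problems, line)
    		}
    	}
    	return problems, sc.Err()
    }

    func main() {
    	problems, err := findKubeletProblems()
    	if err != nil {
    		fmt.Println("could not read kubelet journal:", err)
    		return
    	}
    	fmt.Println("X Problems detected in kubelet:")
    	for _, p := range problems {
    		fmt.Println(" ", p)
    	}
    }
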
	I0912 15:20:33.812614    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:33.812655    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:38.812844    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:38.812885    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:38.380925    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:43.813153    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:43.813186    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:43.383236    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:43.383458    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:20:43.410652    4867 logs.go:276] 2 containers: [eb0dc5acb005 bdc9dc70be85]
	I0912 15:20:43.410752    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:20:43.429435    4867 logs.go:276] 2 containers: [122be89153d2 d3229f85be9b]
	I0912 15:20:43.429507    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:20:43.440548    4867 logs.go:276] 1 containers: [7cc43947deca]
	I0912 15:20:43.440622    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:20:43.452002    4867 logs.go:276] 2 containers: [a3bda796bcce ae93257a08cb]
	I0912 15:20:43.452070    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:20:43.462496    4867 logs.go:276] 1 containers: [f11c4c968a9a]
	I0912 15:20:43.462574    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:20:43.473393    4867 logs.go:276] 2 containers: [f30094d831c4 560d61b775be]
	I0912 15:20:43.473458    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:20:43.483841    4867 logs.go:276] 0 containers: []
	W0912 15:20:43.483853    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:20:43.483912    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:20:43.497638    4867 logs.go:276] 2 containers: [83b1123d4f7f c39b454144bf]
	I0912 15:20:43.497659    4867 logs.go:123] Gathering logs for storage-provisioner [83b1123d4f7f] ...
	I0912 15:20:43.497665    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b1123d4f7f"
	I0912 15:20:43.509183    4867 logs.go:123] Gathering logs for storage-provisioner [c39b454144bf] ...
	I0912 15:20:43.509196    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39b454144bf"
	I0912 15:20:43.520471    4867 logs.go:123] Gathering logs for kube-controller-manager [f30094d831c4] ...
	I0912 15:20:43.520482    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30094d831c4"
	I0912 15:20:43.537913    4867 logs.go:123] Gathering logs for kube-controller-manager [560d61b775be] ...
	I0912 15:20:43.537933    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560d61b775be"
	I0912 15:20:43.552457    4867 logs.go:123] Gathering logs for etcd [122be89153d2] ...
	I0912 15:20:43.552470    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 122be89153d2"
	I0912 15:20:43.566614    4867 logs.go:123] Gathering logs for kube-scheduler [a3bda796bcce] ...
	I0912 15:20:43.566627    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bda796bcce"
	I0912 15:20:43.578608    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:20:43.578621    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:20:43.616067    4867 logs.go:123] Gathering logs for kube-apiserver [eb0dc5acb005] ...
	I0912 15:20:43.616078    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0dc5acb005"
	I0912 15:20:43.631258    4867 logs.go:123] Gathering logs for coredns [7cc43947deca] ...
	I0912 15:20:43.631271    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc43947deca"
	I0912 15:20:43.643546    4867 logs.go:123] Gathering logs for kube-scheduler [ae93257a08cb] ...
	I0912 15:20:43.643557    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae93257a08cb"
	I0912 15:20:43.664703    4867 logs.go:123] Gathering logs for kube-proxy [f11c4c968a9a] ...
	I0912 15:20:43.664718    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11c4c968a9a"
	I0912 15:20:43.677045    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:20:43.677056    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:20:43.702067    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:20:43.702077    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:20:43.740074    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:20:43.740171    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:20:43.741558    4867 logs.go:123] Gathering logs for kube-apiserver [bdc9dc70be85] ...
	I0912 15:20:43.741567    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc9dc70be85"
	I0912 15:20:43.780025    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:20:43.780041    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:20:43.791860    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:20:43.791871    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:20:43.796061    4867 logs.go:123] Gathering logs for etcd [d3229f85be9b] ...
	I0912 15:20:43.796069    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3229f85be9b"
	I0912 15:20:43.811565    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:20:43.811578    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:20:43.811612    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:20:43.811618    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:20:43.811622    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:20:43.811626    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:20:43.811642    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:20:48.813581    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:48.813632    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0912 15:20:49.164730    4705 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0912 15:20:49.172879    4705 out.go:177] * Enabled addons: storage-provisioner
	I0912 15:20:49.181884    4705 addons.go:510] duration metric: took 30.467842208s for enable addons: enabled=[storage-provisioner]
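
[Editor's note] The asymmetric outcome above is worth noting: both addon manifests were kubectl-applied on the VM over SSH earlier, but 'default-storageclass' additionally has to list StorageClasses through the API to mark "standard" as default, and that round trip hit the same dead 10.0.2.15:8443 endpoint as the healthz probes. A client-go sketch of the failing call, using the kubeconfig path from the log; the 30-second timeout is an assumption:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path taken from the log above.
    	cfg, err := clientcmd.BuildConfigFromFlags("",
    		"/Users/jenkins/minikube-integration/19616-1259/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	clientset := kubernetes.NewForConfigOrDie(cfg)

    	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    	defer cancel()

    	// This is the kind of call that produced "Error listing StorageClasses:
    	// ... dial tcp 10.0.2.15:8443: i/o timeout" while the apiserver was down.
    	scs, err := clientset.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		fmt.Println("Error listing StorageClasses:", err)
    		return
    	}
    	fmt.Println("storage classes:", len(scs.Items))
    }
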
	I0912 15:20:53.814232    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:53.814271    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:53.814374    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:58.814850    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:58.814870    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:58.814910    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:58.814978    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:20:58.826522    4867 logs.go:276] 2 containers: [eb0dc5acb005 bdc9dc70be85]
	I0912 15:20:58.826596    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:20:58.837844    4867 logs.go:276] 2 containers: [122be89153d2 d3229f85be9b]
	I0912 15:20:58.837922    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:20:58.848848    4867 logs.go:276] 1 containers: [7cc43947deca]
	I0912 15:20:58.848910    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:20:58.860318    4867 logs.go:276] 2 containers: [a3bda796bcce ae93257a08cb]
	I0912 15:20:58.860403    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:20:58.871023    4867 logs.go:276] 1 containers: [f11c4c968a9a]
	I0912 15:20:58.871089    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:20:58.881536    4867 logs.go:276] 2 containers: [f30094d831c4 560d61b775be]
	I0912 15:20:58.881606    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:20:58.892106    4867 logs.go:276] 0 containers: []
	W0912 15:20:58.892119    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:20:58.892174    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:20:58.902757    4867 logs.go:276] 2 containers: [83b1123d4f7f c39b454144bf]
	I0912 15:20:58.902773    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:20:58.902778    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:20:58.907387    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:20:58.907396    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:20:58.942697    4867 logs.go:123] Gathering logs for etcd [d3229f85be9b] ...
	I0912 15:20:58.942708    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3229f85be9b"
	I0912 15:20:58.957238    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:20:58.957249    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:20:58.993595    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:20:58.993687    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:20:58.995023    4867 logs.go:123] Gathering logs for kube-proxy [f11c4c968a9a] ...
	I0912 15:20:58.995027    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11c4c968a9a"
	I0912 15:20:59.006740    4867 logs.go:123] Gathering logs for storage-provisioner [83b1123d4f7f] ...
	I0912 15:20:59.006749    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b1123d4f7f"
	I0912 15:20:59.018075    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:20:59.018087    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:20:59.030273    4867 logs.go:123] Gathering logs for storage-provisioner [c39b454144bf] ...
	I0912 15:20:59.030287    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39b454144bf"
	I0912 15:20:59.041748    4867 logs.go:123] Gathering logs for kube-apiserver [eb0dc5acb005] ...
	I0912 15:20:59.041760    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0dc5acb005"
	I0912 15:20:59.055907    4867 logs.go:123] Gathering logs for kube-apiserver [bdc9dc70be85] ...
	I0912 15:20:59.055918    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc9dc70be85"
	I0912 15:20:59.094395    4867 logs.go:123] Gathering logs for etcd [122be89153d2] ...
	I0912 15:20:59.094408    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 122be89153d2"
	I0912 15:20:59.108199    4867 logs.go:123] Gathering logs for coredns [7cc43947deca] ...
	I0912 15:20:59.108212    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc43947deca"
	I0912 15:20:59.119366    4867 logs.go:123] Gathering logs for kube-scheduler [ae93257a08cb] ...
	I0912 15:20:59.119381    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae93257a08cb"
	I0912 15:20:59.134155    4867 logs.go:123] Gathering logs for kube-controller-manager [f30094d831c4] ...
	I0912 15:20:59.134165    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30094d831c4"
	I0912 15:20:59.151603    4867 logs.go:123] Gathering logs for kube-controller-manager [560d61b775be] ...
	I0912 15:20:59.151615    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560d61b775be"
	I0912 15:20:59.165894    4867 logs.go:123] Gathering logs for kube-scheduler [a3bda796bcce] ...
	I0912 15:20:59.165904    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bda796bcce"
	I0912 15:20:59.177070    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:20:59.177080    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:20:59.202401    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:20:59.202410    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:20:59.202436    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:20:59.202441    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:20:59.202445    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:20:59.202450    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:20:59.202452    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:21:03.815739    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:21:03.815781    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:21:08.821616    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:21:08.821665    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:21:09.211439    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:21:13.829175    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:21:13.829206    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:21:14.219119    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
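Two minikube processes are interleaved from here on: PID 4867 (the stopped-upgrade-841000 profile named in the kubelet warnings) and PID 4705 (apparently a second, concurrent upgrade-test profile). Both poll the guest apiserver at https://10.0.2.15:8443/healthz; every GET expires with "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" because the apiserver never returns response headers within the client timeout, and each expiry triggers another full round of log collection. A minimal sketch of the same probe, assuming a 5-second timeout and curl in place of minikube's Go HTTP client (minikube authenticates with the cluster's client certificate; an anonymous probe may additionally need --cert/--key):

    # -k: the apiserver serves a self-signed certificate; -s: quiet output;
    # --max-time 5: assumed timeout, mirroring the Client.Timeout errors above
    until curl -ks --max-time 5 https://10.0.2.15:8443/healthz | grep -q ok; do
        echo "stopped: no healthz answer within 5s; retrying"
        sleep 5
    done
    echo "apiserver healthy"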
	I0912 15:21:14.219386    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:21:14.250137    4867 logs.go:276] 2 containers: [eb0dc5acb005 bdc9dc70be85]
	I0912 15:21:14.250273    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:21:14.268223    4867 logs.go:276] 2 containers: [122be89153d2 d3229f85be9b]
	I0912 15:21:14.268333    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:21:14.281878    4867 logs.go:276] 1 containers: [7cc43947deca]
	I0912 15:21:14.281951    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:21:14.293733    4867 logs.go:276] 2 containers: [a3bda796bcce ae93257a08cb]
	I0912 15:21:14.293811    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:21:14.304343    4867 logs.go:276] 1 containers: [f11c4c968a9a]
	I0912 15:21:14.304415    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:21:14.320105    4867 logs.go:276] 2 containers: [f30094d831c4 560d61b775be]
	I0912 15:21:14.320175    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:21:14.330512    4867 logs.go:276] 0 containers: []
	W0912 15:21:14.330523    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:21:14.330585    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:21:14.343560    4867 logs.go:276] 2 containers: [83b1123d4f7f c39b454144bf]
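Each collection round starts by enumerating control-plane containers one component at a time with docker ps -a --filter=name=k8s_<component> --format={{.ID}}; the k8s_ name prefix is the naming convention dockershim/cri-dockerd applies to pod containers. Two IDs for a component (as for kube-apiserver and etcd here) usually means an exited pre-restart container plus its replacement, since ps -a includes stopped containers; kindnet matching nothing is expected, as that CNI is not deployed in this profile. The same enumeration by hand, as a sketch:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
        echo "$c: $(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}' | xargs)"
    done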
	I0912 15:21:14.343579    4867 logs.go:123] Gathering logs for kube-proxy [f11c4c968a9a] ...
	I0912 15:21:14.343585    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11c4c968a9a"
	I0912 15:21:14.355685    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:21:14.355696    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:21:14.392797    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:21:14.392889    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:21:14.394271    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:21:14.394279    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:21:14.431368    4867 logs.go:123] Gathering logs for etcd [d3229f85be9b] ...
	I0912 15:21:14.431382    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3229f85be9b"
	I0912 15:21:14.446549    4867 logs.go:123] Gathering logs for kube-scheduler [a3bda796bcce] ...
	I0912 15:21:14.446560    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bda796bcce"
	I0912 15:21:14.459525    4867 logs.go:123] Gathering logs for kube-controller-manager [560d61b775be] ...
	I0912 15:21:14.459539    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560d61b775be"
	I0912 15:21:14.474231    4867 logs.go:123] Gathering logs for storage-provisioner [83b1123d4f7f] ...
	I0912 15:21:14.474241    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b1123d4f7f"
	I0912 15:21:14.485924    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:21:14.485934    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
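The "container status" step is a shell fallback chain: the backticks expand to crictl's path when crictl is installed, or to the bare word crictl otherwise (which then fails to execute), and on any failure the || falls through to plain docker ps -a. The same one-liner with modern $() quoting, behavior unchanged:

    # try crictl if present on PATH, else fall back to listing via docker
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a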
	I0912 15:21:14.497975    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:21:14.497987    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:21:14.502241    4867 logs.go:123] Gathering logs for kube-apiserver [bdc9dc70be85] ...
	I0912 15:21:14.502251    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc9dc70be85"
	I0912 15:21:14.539256    4867 logs.go:123] Gathering logs for etcd [122be89153d2] ...
	I0912 15:21:14.539268    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 122be89153d2"
	I0912 15:21:14.553773    4867 logs.go:123] Gathering logs for kube-scheduler [ae93257a08cb] ...
	I0912 15:21:14.553783    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae93257a08cb"
	I0912 15:21:14.569249    4867 logs.go:123] Gathering logs for kube-apiserver [eb0dc5acb005] ...
	I0912 15:21:14.569260    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0dc5acb005"
	I0912 15:21:14.589323    4867 logs.go:123] Gathering logs for coredns [7cc43947deca] ...
	I0912 15:21:14.589334    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc43947deca"
	I0912 15:21:14.600626    4867 logs.go:123] Gathering logs for storage-provisioner [c39b454144bf] ...
	I0912 15:21:14.600638    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39b454144bf"
	I0912 15:21:14.618973    4867 logs.go:123] Gathering logs for kube-controller-manager [f30094d831c4] ...
	I0912 15:21:14.618984    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30094d831c4"
	I0912 15:21:14.636798    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:21:14.636813    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:21:14.662283    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:21:14.662291    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:21:14.662318    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:21:14.662323    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:21:14.662327    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:21:14.662330    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:21:14.662343    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:21:18.835433    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:21:18.835563    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:21:18.853254    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:21:18.853332    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:21:18.871160    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:21:18.871231    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:21:18.883838    4705 logs.go:276] 2 containers: [457bfc5c142a f97fcc59df96]
	I0912 15:21:18.883908    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:21:18.895587    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:21:18.895658    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:21:18.906291    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:21:18.906355    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:21:18.916970    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:21:18.917040    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:21:18.927619    4705 logs.go:276] 0 containers: []
	W0912 15:21:18.927629    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:21:18.927686    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:21:18.938765    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:21:18.938779    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:21:18.938784    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:21:18.953915    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:21:18.953927    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:21:18.966174    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:21:18.966185    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:21:18.980967    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:21:18.980981    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:21:19.003894    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:21:19.003902    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:21:19.036128    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:21:19.036136    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
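The dmesg step trims the kernel ring buffer to warning-or-worse entries. Long-form equivalent of the short flags used above (-P: no pager, -H: human-readable timestamps, -L=never: no color codes in the captured log):

    sudo dmesg --nopager --human --color=never \
        --level=warn,err,crit,alert,emerg | tail -n 400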
	I0912 15:21:19.040894    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:21:19.040901    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:21:19.076075    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:21:19.076086    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:21:19.090181    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:21:19.090193    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:21:19.102169    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:21:19.102182    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:21:19.113962    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:21:19.113971    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:21:19.131734    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:21:19.131745    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:21:19.143693    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:21:19.143707    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:21:21.659145    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:21:24.673233    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:21:26.663884    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:21:26.663990    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:21:26.675403    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:21:26.675482    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:21:26.686927    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:21:26.687006    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:21:26.697387    4705 logs.go:276] 2 containers: [457bfc5c142a f97fcc59df96]
	I0912 15:21:26.697459    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:21:26.708362    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:21:26.708432    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:21:26.719062    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:21:26.719129    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:21:26.730349    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:21:26.730421    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:21:26.745040    4705 logs.go:276] 0 containers: []
	W0912 15:21:26.745052    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:21:26.745116    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:21:26.755880    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:21:26.755894    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:21:26.755899    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:21:26.789925    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:21:26.789934    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:21:26.794518    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:21:26.794526    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
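"describe nodes" is gathered with the guest's pinned binary, /var/lib/minikube/binaries/v1.24.1/kubectl, against the in-VM kubeconfig, so the client always matches the deployed v1.24.1 control plane. To reproduce this step from the host, a sketch (the <profile> placeholder stands for whichever test profile is being inspected; it is not taken from this log):

    minikube -p <profile> ssh "sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
        describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"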
	I0912 15:21:26.829500    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:21:26.829511    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:21:26.845113    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:21:26.845124    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:21:26.866803    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:21:26.866813    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:21:26.891650    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:21:26.891664    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:21:26.906451    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:21:26.906465    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:21:26.920411    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:21:26.920422    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:21:26.936376    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:21:26.936389    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:21:26.948314    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:21:26.948325    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:21:26.960453    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:21:26.960466    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:21:26.971775    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:21:26.971784    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:21:29.488669    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:21:29.677518    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:21:29.677764    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:21:29.696568    4867 logs.go:276] 2 containers: [eb0dc5acb005 bdc9dc70be85]
	I0912 15:21:29.696660    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:21:29.716020    4867 logs.go:276] 2 containers: [122be89153d2 d3229f85be9b]
	I0912 15:21:29.716092    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:21:29.726732    4867 logs.go:276] 1 containers: [7cc43947deca]
	I0912 15:21:29.726805    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:21:29.737998    4867 logs.go:276] 2 containers: [a3bda796bcce ae93257a08cb]
	I0912 15:21:29.738075    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:21:29.748442    4867 logs.go:276] 1 containers: [f11c4c968a9a]
	I0912 15:21:29.748515    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:21:29.758955    4867 logs.go:276] 2 containers: [f30094d831c4 560d61b775be]
	I0912 15:21:29.759028    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:21:29.768994    4867 logs.go:276] 0 containers: []
	W0912 15:21:29.769006    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:21:29.769061    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:21:29.779685    4867 logs.go:276] 2 containers: [83b1123d4f7f c39b454144bf]
	I0912 15:21:29.779706    4867 logs.go:123] Gathering logs for kube-apiserver [eb0dc5acb005] ...
	I0912 15:21:29.779711    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0dc5acb005"
	I0912 15:21:29.794027    4867 logs.go:123] Gathering logs for etcd [122be89153d2] ...
	I0912 15:21:29.794039    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 122be89153d2"
	I0912 15:21:29.808484    4867 logs.go:123] Gathering logs for kube-scheduler [ae93257a08cb] ...
	I0912 15:21:29.808495    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae93257a08cb"
	I0912 15:21:29.824041    4867 logs.go:123] Gathering logs for storage-provisioner [c39b454144bf] ...
	I0912 15:21:29.824052    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39b454144bf"
	I0912 15:21:29.835693    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:21:29.835704    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:21:29.870527    4867 logs.go:123] Gathering logs for kube-scheduler [a3bda796bcce] ...
	I0912 15:21:29.870540    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bda796bcce"
	I0912 15:21:29.882352    4867 logs.go:123] Gathering logs for kube-proxy [f11c4c968a9a] ...
	I0912 15:21:29.882364    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11c4c968a9a"
	I0912 15:21:29.893793    4867 logs.go:123] Gathering logs for kube-controller-manager [560d61b775be] ...
	I0912 15:21:29.893807    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560d61b775be"
	I0912 15:21:29.908353    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:21:29.908365    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:21:29.945334    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:21:29.945428    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:21:29.946806    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:21:29.946811    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:21:29.950824    4867 logs.go:123] Gathering logs for kube-apiserver [bdc9dc70be85] ...
	I0912 15:21:29.950834    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc9dc70be85"
	I0912 15:21:29.988545    4867 logs.go:123] Gathering logs for etcd [d3229f85be9b] ...
	I0912 15:21:29.988555    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3229f85be9b"
	I0912 15:21:30.002977    4867 logs.go:123] Gathering logs for coredns [7cc43947deca] ...
	I0912 15:21:30.002988    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc43947deca"
	I0912 15:21:30.013993    4867 logs.go:123] Gathering logs for storage-provisioner [83b1123d4f7f] ...
	I0912 15:21:30.014005    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b1123d4f7f"
	I0912 15:21:30.025621    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:21:30.025632    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:21:30.051437    4867 logs.go:123] Gathering logs for kube-controller-manager [f30094d831c4] ...
	I0912 15:21:30.051449    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30094d831c4"
	I0912 15:21:30.074972    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:21:30.074985    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:21:30.087162    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:21:30.087171    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:21:30.087202    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:21:30.087206    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:21:30.087209    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:21:30.087213    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:21:30.087216    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:21:34.492987    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:21:34.493834    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:21:34.528882    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:21:34.529011    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:21:34.547537    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:21:34.547624    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:21:34.560173    4705 logs.go:276] 2 containers: [457bfc5c142a f97fcc59df96]
	I0912 15:21:34.560247    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:21:34.570929    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:21:34.570992    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:21:34.581131    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:21:34.581205    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:21:34.591855    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:21:34.591933    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:21:34.601895    4705 logs.go:276] 0 containers: []
	W0912 15:21:34.601910    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:21:34.601969    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:21:34.612196    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:21:34.612210    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:21:34.612215    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:21:34.635230    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:21:34.635238    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:21:34.646337    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:21:34.646347    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:21:34.651273    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:21:34.651280    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:21:34.686262    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:21:34.686276    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:21:34.698227    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:21:34.698239    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:21:34.714004    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:21:34.714018    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:21:34.731831    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:21:34.731841    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:21:34.743079    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:21:34.743092    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:21:34.778177    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:21:34.778186    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:21:34.792492    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:21:34.792504    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:21:34.806370    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:21:34.806380    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:21:34.821367    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:21:34.821377    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:21:37.333354    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:21:40.093785    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:21:42.334556    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:21:42.334720    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:21:42.349120    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:21:42.349199    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:21:42.361094    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:21:42.361165    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:21:42.372340    4705 logs.go:276] 2 containers: [457bfc5c142a f97fcc59df96]
	I0912 15:21:42.372409    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:21:42.382487    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:21:42.382556    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:21:42.392359    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:21:42.392442    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:21:42.402914    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:21:42.402988    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:21:42.413025    4705 logs.go:276] 0 containers: []
	W0912 15:21:42.413035    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:21:42.413091    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:21:42.423141    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:21:42.423155    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:21:42.423160    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:21:42.443129    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:21:42.443140    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:21:42.475887    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:21:42.475895    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:21:42.480509    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:21:42.480515    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:21:42.491945    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:21:42.491954    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:21:42.503545    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:21:42.503556    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:21:42.519049    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:21:42.519063    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:21:42.531084    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:21:42.531098    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:21:42.566159    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:21:42.566171    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:21:42.580487    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:21:42.580496    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:21:42.595229    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:21:42.595242    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:21:42.611601    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:21:42.611612    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:21:42.638051    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:21:42.638063    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:21:45.151732    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:21:45.097219    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:21:45.097405    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:21:45.119018    4867 logs.go:276] 2 containers: [eb0dc5acb005 bdc9dc70be85]
	I0912 15:21:45.119105    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:21:45.132278    4867 logs.go:276] 2 containers: [122be89153d2 d3229f85be9b]
	I0912 15:21:45.132352    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:21:45.144191    4867 logs.go:276] 1 containers: [7cc43947deca]
	I0912 15:21:45.144260    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:21:45.156117    4867 logs.go:276] 2 containers: [a3bda796bcce ae93257a08cb]
	I0912 15:21:45.156179    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:21:45.171040    4867 logs.go:276] 1 containers: [f11c4c968a9a]
	I0912 15:21:45.171113    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:21:45.181608    4867 logs.go:276] 2 containers: [f30094d831c4 560d61b775be]
	I0912 15:21:45.181677    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:21:45.198741    4867 logs.go:276] 0 containers: []
	W0912 15:21:45.198756    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:21:45.198815    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:21:45.209199    4867 logs.go:276] 2 containers: [83b1123d4f7f c39b454144bf]
	I0912 15:21:45.209219    4867 logs.go:123] Gathering logs for kube-scheduler [ae93257a08cb] ...
	I0912 15:21:45.209225    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae93257a08cb"
	I0912 15:21:45.224025    4867 logs.go:123] Gathering logs for kube-proxy [f11c4c968a9a] ...
	I0912 15:21:45.224035    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11c4c968a9a"
	I0912 15:21:45.241677    4867 logs.go:123] Gathering logs for kube-controller-manager [f30094d831c4] ...
	I0912 15:21:45.241688    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30094d831c4"
	I0912 15:21:45.259461    4867 logs.go:123] Gathering logs for storage-provisioner [83b1123d4f7f] ...
	I0912 15:21:45.259469    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b1123d4f7f"
	I0912 15:21:45.271878    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:21:45.271889    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:21:45.283161    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:21:45.283172    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:21:45.318824    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:21:45.318922    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:21:45.320254    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:21:45.320258    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:21:45.356957    4867 logs.go:123] Gathering logs for kube-scheduler [a3bda796bcce] ...
	I0912 15:21:45.356968    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bda796bcce"
	I0912 15:21:45.369680    4867 logs.go:123] Gathering logs for kube-controller-manager [560d61b775be] ...
	I0912 15:21:45.369693    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560d61b775be"
	I0912 15:21:45.384474    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:21:45.384486    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:21:45.410000    4867 logs.go:123] Gathering logs for kube-apiserver [eb0dc5acb005] ...
	I0912 15:21:45.410010    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0dc5acb005"
	I0912 15:21:45.424332    4867 logs.go:123] Gathering logs for etcd [d3229f85be9b] ...
	I0912 15:21:45.424342    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3229f85be9b"
	I0912 15:21:45.443286    4867 logs.go:123] Gathering logs for coredns [7cc43947deca] ...
	I0912 15:21:45.443298    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc43947deca"
	I0912 15:21:45.453970    4867 logs.go:123] Gathering logs for storage-provisioner [c39b454144bf] ...
	I0912 15:21:45.453981    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39b454144bf"
	I0912 15:21:45.465428    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:21:45.465439    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:21:45.470230    4867 logs.go:123] Gathering logs for kube-apiserver [bdc9dc70be85] ...
	I0912 15:21:45.470239    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc9dc70be85"
	I0912 15:21:45.508131    4867 logs.go:123] Gathering logs for etcd [122be89153d2] ...
	I0912 15:21:45.508141    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 122be89153d2"
	I0912 15:21:45.527549    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:21:45.527559    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:21:45.527585    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:21:45.527589    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:21:45.527593    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:21:45.527597    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:21:45.527601    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:21:50.154122    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:21:50.154339    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:21:50.172254    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:21:50.172340    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:21:50.186212    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:21:50.186283    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:21:50.197747    4705 logs.go:276] 2 containers: [457bfc5c142a f97fcc59df96]
	I0912 15:21:50.197812    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:21:50.208398    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:21:50.208459    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:21:50.218924    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:21:50.218988    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:21:50.229045    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:21:50.229105    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:21:50.238964    4705 logs.go:276] 0 containers: []
	W0912 15:21:50.238976    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:21:50.239037    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:21:50.249021    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:21:50.249038    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:21:50.249043    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:21:50.263033    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:21:50.263046    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:21:50.274737    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:21:50.274750    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:21:50.286080    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:21:50.286091    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:21:50.309574    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:21:50.309582    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:21:50.320634    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:21:50.320645    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:21:50.325143    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:21:50.325153    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:21:50.359483    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:21:50.359495    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:21:50.373774    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:21:50.373783    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:21:50.385342    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:21:50.385353    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:21:50.400785    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:21:50.400796    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:21:50.424479    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:21:50.424489    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:21:50.436257    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:21:50.436272    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:21:52.971424    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:21:55.532425    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:21:57.974011    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:21:57.974175    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:21:57.992454    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:21:57.992548    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:21:58.005259    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:21:58.005328    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:21:58.021001    4705 logs.go:276] 2 containers: [457bfc5c142a f97fcc59df96]
	I0912 15:21:58.021072    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:21:58.032242    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:21:58.032308    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:21:58.043113    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:21:58.043187    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:21:58.053975    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:21:58.054048    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:21:58.064120    4705 logs.go:276] 0 containers: []
	W0912 15:21:58.064130    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:21:58.064184    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:21:58.074177    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:21:58.074191    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:21:58.074197    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:21:58.085966    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:21:58.085977    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:21:58.109206    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:21:58.109217    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:21:58.120538    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:21:58.120548    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:21:58.156844    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:21:58.156855    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:21:58.171137    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:21:58.171147    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:21:58.191316    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:21:58.191327    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:21:58.203259    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:21:58.203268    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:21:58.220192    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:21:58.220202    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:21:58.253040    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:21:58.253047    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:21:58.257695    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:21:58.257704    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:21:58.276291    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:21:58.276301    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:21:58.295520    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:21:58.295533    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:22:00.534964    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:22:00.535408    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:22:00.570452    4867 logs.go:276] 2 containers: [eb0dc5acb005 bdc9dc70be85]
	I0912 15:22:00.570585    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:22:00.590553    4867 logs.go:276] 2 containers: [122be89153d2 d3229f85be9b]
	I0912 15:22:00.590647    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:22:00.607852    4867 logs.go:276] 1 containers: [7cc43947deca]
	I0912 15:22:00.607933    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:22:00.619100    4867 logs.go:276] 2 containers: [a3bda796bcce ae93257a08cb]
	I0912 15:22:00.619173    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:22:00.629456    4867 logs.go:276] 1 containers: [f11c4c968a9a]
	I0912 15:22:00.629519    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:22:00.640136    4867 logs.go:276] 2 containers: [f30094d831c4 560d61b775be]
	I0912 15:22:00.640202    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:22:00.651860    4867 logs.go:276] 0 containers: []
	W0912 15:22:00.651873    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:22:00.651936    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:22:00.667086    4867 logs.go:276] 2 containers: [83b1123d4f7f c39b454144bf]
	I0912 15:22:00.667114    4867 logs.go:123] Gathering logs for storage-provisioner [83b1123d4f7f] ...
	I0912 15:22:00.667119    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b1123d4f7f"
	I0912 15:22:00.678721    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:22:00.678732    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:22:00.682964    4867 logs.go:123] Gathering logs for kube-apiserver [eb0dc5acb005] ...
	I0912 15:22:00.682972    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0dc5acb005"
	I0912 15:22:00.698077    4867 logs.go:123] Gathering logs for kube-apiserver [bdc9dc70be85] ...
	I0912 15:22:00.698087    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc9dc70be85"
	I0912 15:22:00.736896    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:22:00.736912    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:22:00.760433    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:22:00.760447    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:22:00.772735    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:22:00.772751    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:22:00.808461    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:22:00.808553    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:22:00.809870    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:22:00.809874    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:22:00.845032    4867 logs.go:123] Gathering logs for coredns [7cc43947deca] ...
	I0912 15:22:00.845043    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc43947deca"
	I0912 15:22:00.855923    4867 logs.go:123] Gathering logs for kube-scheduler [a3bda796bcce] ...
	I0912 15:22:00.855936    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bda796bcce"
	I0912 15:22:00.867782    4867 logs.go:123] Gathering logs for kube-scheduler [ae93257a08cb] ...
	I0912 15:22:00.867793    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae93257a08cb"
	I0912 15:22:00.883412    4867 logs.go:123] Gathering logs for kube-proxy [f11c4c968a9a] ...
	I0912 15:22:00.883422    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11c4c968a9a"
	I0912 15:22:00.895324    4867 logs.go:123] Gathering logs for kube-controller-manager [f30094d831c4] ...
	I0912 15:22:00.895336    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30094d831c4"
	I0912 15:22:00.913310    4867 logs.go:123] Gathering logs for kube-controller-manager [560d61b775be] ...
	I0912 15:22:00.913321    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560d61b775be"
	I0912 15:22:00.927602    4867 logs.go:123] Gathering logs for storage-provisioner [c39b454144bf] ...
	I0912 15:22:00.927615    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39b454144bf"
	I0912 15:22:00.938552    4867 logs.go:123] Gathering logs for etcd [122be89153d2] ...
	I0912 15:22:00.938562    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 122be89153d2"
	I0912 15:22:00.951847    4867 logs.go:123] Gathering logs for etcd [d3229f85be9b] ...
	I0912 15:22:00.951864    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3229f85be9b"
	I0912 15:22:00.966616    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:22:00.966628    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:22:00.966652    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:22:00.966656    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:22:00.966659    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:22:00.966662    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:22:00.966664    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
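The "Found kubelet problem" warnings and the "X Problems detected in kubelet" summary are two ends of the same scan: the journalctl dump is searched line by line for known failure signatures, and the matches are replayed as the summary. A rough sketch of that scan; the pattern list here is an assumption, not minikube's actual table in logs.go:

	// scanKubeletLog flags journal lines matching known kubelet problem
	// signatures, the way the logs.go:138 warnings above are produced.
	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// Illustrative subset: the two RBAC failures this run keeps reporting.
	var problemPatterns = []string{
		"failed to list *v1.ConfigMap",
		"Failed to watch *v1.ConfigMap",
	}

	func scanKubeletLog(journal string) []string {
		var problems []string
		sc := bufio.NewScanner(strings.NewReader(journal))
		for sc.Scan() {
			line := sc.Text()
			for _, p := range problemPatterns {
				if strings.Contains(line, p) {
					problems = append(problems, line)
					break
				}
			}
		}
		return problems
	}

	func main() {
		journal := `Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162 reflector.go:324] failed to list *v1.ConfigMap: configmaps "coredns" is forbidden`
		for _, p := range scanKubeletLog(journal) {
			fmt.Println("Found kubelet problem:", p)
		}
	}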
	I0912 15:22:00.809201    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:22:05.811489    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:22:05.811733    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:22:05.830282    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:22:05.830373    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:22:05.843749    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:22:05.843830    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:22:05.855463    4705 logs.go:276] 2 containers: [457bfc5c142a f97fcc59df96]
	I0912 15:22:05.855533    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:22:05.866248    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:22:05.866319    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:22:05.876913    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:22:05.876990    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:22:05.888055    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:22:05.888148    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:22:05.898754    4705 logs.go:276] 0 containers: []
	W0912 15:22:05.898765    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:22:05.898824    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:22:05.909930    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:22:05.909946    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:22:05.909952    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:22:05.945537    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:22:05.945550    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:22:05.950207    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:22:05.950213    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:22:05.994675    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:22:05.994686    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:22:06.010658    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:22:06.010669    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:22:06.024949    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:22:06.024960    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:22:06.041983    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:22:06.041996    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:22:06.064956    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:22:06.064964    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:22:06.075780    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:22:06.075790    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:22:06.095525    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:22:06.095534    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:22:06.106863    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:22:06.106873    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:22:06.139701    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:22:06.139711    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:22:06.151172    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:22:06.151183    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
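Every "Gathering logs for ..." step above resolves to one shell command run inside the guest as /bin/bash -c "...": journalctl for kubelet and for the docker/cri-docker units, a level-filtered dmesg for kernel warnings, kubectl describe nodes for the cluster view, docker logs --tail 400 for each container, and a crictl invocation that falls back to docker ps when crictl is absent. A condensed sketch of that dispatch table (gatherCommands is an invented helper; the command strings are copied verbatim from the log):

	// gatherCommands maps each log source shown above to the bash command
	// that produces it; each entry would be executed over SSH in the guest
	// as /bin/bash -c "<cmd>".
	package main

	import "fmt"

	func gatherCommands(containers map[string][]string) map[string]string {
		cmds := map[string]string{
			"kubelet": "sudo journalctl -u kubelet -n 400",
			"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"describe nodes": "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes " +
				"--kubeconfig=/var/lib/minikube/kubeconfig",
			"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
			"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}
		for component, ids := range containers {
			for _, id := range ids {
				cmds[fmt.Sprintf("%s [%s]", component, id)] = "docker logs --tail 400 " + id
			}
		}
		return cmds
	}

	func main() {
		for name, cmd := range gatherCommands(map[string][]string{"coredns": {"457bfc5c142a"}}) {
			fmt.Printf("Gathering logs for %s ...\n  /bin/bash -c %q\n", name, cmd)
		}
	}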
	I0912 15:22:08.665249    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:22:10.969195    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:22:13.667715    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:22:13.668059    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:22:13.715090    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:22:13.715191    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:22:13.732999    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:22:13.733073    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:22:13.746750    4705 logs.go:276] 2 containers: [457bfc5c142a f97fcc59df96]
	I0912 15:22:13.746817    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:22:13.758591    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:22:13.758664    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:22:13.771746    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:22:13.771811    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:22:13.783302    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:22:13.783370    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:22:13.797198    4705 logs.go:276] 0 containers: []
	W0912 15:22:13.797209    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:22:13.797266    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:22:13.807548    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:22:13.807563    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:22:13.807570    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:22:13.841939    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:22:13.841947    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:22:13.876442    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:22:13.876454    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:22:13.891928    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:22:13.891941    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:22:13.917111    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:22:13.917123    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:22:13.929472    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:22:13.929485    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:22:13.941754    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:22:13.941764    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:22:13.946318    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:22:13.946326    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:22:13.960479    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:22:13.960489    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:22:13.974885    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:22:13.974894    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:22:13.986990    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:22:13.987002    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:22:13.998233    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:22:13.998247    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:22:14.010212    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:22:14.010223    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
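Stepping back, the whole section is one loop per cluster: probe healthz, report the 5-second timeout, dump the diagnostics above, pause, and retry until an overall deadline expires. Schematically (waitForAPIServer and the pause duration are assumptions made to keep the sketch self-contained):

	// waitForAPIServer polls the health endpoint until it answers or the
	// overall deadline passes, gathering diagnostics after every failure;
	// this is the cycle pids 4705 and 4867 each repeat throughout this log.
	package main

	import (
		"fmt"
		"time"
	)

	func waitForAPIServer(deadline time.Duration, probe func() error, gather func()) error {
		end := time.Now().Add(deadline)
		for time.Now().Before(end) {
			fmt.Println("Checking apiserver healthz ...")
			if err := probe(); err != nil {
				fmt.Println("stopped:", err)
				gather()                    // docker ps inventory plus the per-source log dump
				time.Sleep(4 * time.Second) // assumption: roughly the gap between cycles above
				continue
			}
			return nil
		}
		return fmt.Errorf("apiserver never reported healthy within %s", deadline)
	}

	func main() {
		probe := func() error { return fmt.Errorf("context deadline exceeded") }
		gather := func() { fmt.Println("gathering logs ...") }
		if err := waitForAPIServer(15*time.Second, probe, gather); err != nil {
			fmt.Println(err)
		}
	}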
	I0912 15:22:15.971527    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:22:15.971679    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:22:15.984779    4867 logs.go:276] 2 containers: [eb0dc5acb005 bdc9dc70be85]
	I0912 15:22:15.984856    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:22:15.995815    4867 logs.go:276] 2 containers: [122be89153d2 d3229f85be9b]
	I0912 15:22:15.995886    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:22:16.006342    4867 logs.go:276] 1 containers: [7cc43947deca]
	I0912 15:22:16.006419    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:22:16.017402    4867 logs.go:276] 2 containers: [a3bda796bcce ae93257a08cb]
	I0912 15:22:16.017474    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:22:16.028226    4867 logs.go:276] 1 containers: [f11c4c968a9a]
	I0912 15:22:16.028289    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:22:16.038998    4867 logs.go:276] 2 containers: [f30094d831c4 560d61b775be]
	I0912 15:22:16.039068    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:22:16.049583    4867 logs.go:276] 0 containers: []
	W0912 15:22:16.049594    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:22:16.049651    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:22:16.060459    4867 logs.go:276] 2 containers: [83b1123d4f7f c39b454144bf]
	I0912 15:22:16.060478    4867 logs.go:123] Gathering logs for kube-proxy [f11c4c968a9a] ...
	I0912 15:22:16.060484    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11c4c968a9a"
	I0912 15:22:16.072588    4867 logs.go:123] Gathering logs for kube-controller-manager [560d61b775be] ...
	I0912 15:22:16.072598    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560d61b775be"
	I0912 15:22:16.091718    4867 logs.go:123] Gathering logs for storage-provisioner [83b1123d4f7f] ...
	I0912 15:22:16.091727    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b1123d4f7f"
	I0912 15:22:16.102997    4867 logs.go:123] Gathering logs for storage-provisioner [c39b454144bf] ...
	I0912 15:22:16.103009    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39b454144bf"
	I0912 15:22:16.114247    4867 logs.go:123] Gathering logs for kube-apiserver [bdc9dc70be85] ...
	I0912 15:22:16.114258    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc9dc70be85"
	I0912 15:22:16.151860    4867 logs.go:123] Gathering logs for coredns [7cc43947deca] ...
	I0912 15:22:16.151872    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc43947deca"
	I0912 15:22:16.162899    4867 logs.go:123] Gathering logs for kube-scheduler [a3bda796bcce] ...
	I0912 15:22:16.162913    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bda796bcce"
	I0912 15:22:16.178847    4867 logs.go:123] Gathering logs for kube-scheduler [ae93257a08cb] ...
	I0912 15:22:16.178859    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae93257a08cb"
	I0912 15:22:16.197048    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:22:16.197059    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:22:16.233974    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:22:16.234066    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:22:16.235403    4867 logs.go:123] Gathering logs for kube-apiserver [eb0dc5acb005] ...
	I0912 15:22:16.235407    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0dc5acb005"
	I0912 15:22:16.249249    4867 logs.go:123] Gathering logs for etcd [122be89153d2] ...
	I0912 15:22:16.249262    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 122be89153d2"
	I0912 15:22:16.265734    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:22:16.265747    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:22:16.278042    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:22:16.278058    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:22:16.282557    4867 logs.go:123] Gathering logs for etcd [d3229f85be9b] ...
	I0912 15:22:16.282564    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3229f85be9b"
	I0912 15:22:16.297794    4867 logs.go:123] Gathering logs for kube-controller-manager [f30094d831c4] ...
	I0912 15:22:16.297808    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30094d831c4"
	I0912 15:22:16.315293    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:22:16.315307    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:22:16.350035    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:22:16.350050    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:22:16.372931    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:22:16.372939    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:22:16.372965    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:22:16.372969    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:22:16.372973    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:22:16.372976    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:22:16.372979    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
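The recurring "TERM=,COLORTERM=, which probably does not support color" notes record the output layer sniffing the environment before each summary: with both variables empty, it routes errors to fd 2 without color codes. A toy version of that heuristic (wantsColor is a guess at the logic, not minikube's out package):

	// wantsColor mirrors the kind of TERM/COLORTERM sniffing behind the
	// "probably does not support color" lines above.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func wantsColor() bool {
		term := os.Getenv("TERM")
		if term == "" || term == "dumb" {
			return false // the case seen in this log: TERM is empty
		}
		return strings.Contains(term, "color") || os.Getenv("COLORTERM") != ""
	}

	func main() {
		fmt.Printf("TERM=%s,COLORTERM=%s, color=%v\n",
			os.Getenv("TERM"), os.Getenv("COLORTERM"), wantsColor())
	}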
	I0912 15:22:16.529958    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:22:21.532138    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:22:21.532306    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:22:21.547684    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:22:21.547760    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:22:21.560236    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:22:21.560306    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:22:21.571125    4705 logs.go:276] 2 containers: [457bfc5c142a f97fcc59df96]
	I0912 15:22:21.571189    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:22:21.581472    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:22:21.581532    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:22:21.594927    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:22:21.594990    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:22:21.606063    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:22:21.606122    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:22:21.616848    4705 logs.go:276] 0 containers: []
	W0912 15:22:21.616858    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:22:21.616916    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:22:21.628124    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:22:21.628140    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:22:21.628146    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:22:21.648628    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:22:21.648642    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:22:21.661384    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:22:21.661395    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:22:21.679177    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:22:21.679191    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:22:21.691326    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:22:21.691339    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:22:21.715887    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:22:21.715897    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:22:21.727322    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:22:21.727335    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:22:21.739076    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:22:21.739088    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:22:21.771904    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:22:21.771914    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:22:21.776157    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:22:21.776167    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:22:21.811558    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:22:21.811570    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:22:21.825463    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:22:21.825476    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:22:21.843130    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:22:21.843142    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:22:24.356462    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:22:26.376466    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:22:29.358789    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:22:29.359055    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:22:29.382559    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:22:29.382664    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:22:29.398672    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:22:29.398750    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:22:29.415428    4705 logs.go:276] 2 containers: [457bfc5c142a f97fcc59df96]
	I0912 15:22:29.415508    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:22:29.425947    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:22:29.426019    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:22:29.437216    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:22:29.437280    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:22:29.448241    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:22:29.448302    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:22:29.459110    4705 logs.go:276] 0 containers: []
	W0912 15:22:29.459121    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:22:29.459186    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:22:29.473322    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:22:29.473337    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:22:29.473342    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:22:29.485181    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:22:29.485194    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:22:29.499724    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:22:29.499735    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:22:29.511049    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:22:29.511062    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:22:29.522635    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:22:29.522648    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:22:29.546981    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:22:29.546988    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:22:29.558551    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:22:29.558561    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:22:29.592739    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:22:29.592747    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:22:29.609251    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:22:29.609261    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:22:29.628983    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:22:29.628993    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:22:29.641027    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:22:29.641039    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:22:29.658547    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:22:29.658556    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:22:29.663004    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:22:29.663014    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:22:31.378649    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:22:31.378752    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:22:31.389713    4867 logs.go:276] 2 containers: [eb0dc5acb005 bdc9dc70be85]
	I0912 15:22:31.389791    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:22:31.400838    4867 logs.go:276] 2 containers: [122be89153d2 d3229f85be9b]
	I0912 15:22:31.400910    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:22:31.410796    4867 logs.go:276] 1 containers: [7cc43947deca]
	I0912 15:22:31.410867    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:22:31.421529    4867 logs.go:276] 2 containers: [a3bda796bcce ae93257a08cb]
	I0912 15:22:31.421597    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:22:31.432313    4867 logs.go:276] 1 containers: [f11c4c968a9a]
	I0912 15:22:31.432376    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:22:31.442838    4867 logs.go:276] 2 containers: [f30094d831c4 560d61b775be]
	I0912 15:22:31.442907    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:22:31.452350    4867 logs.go:276] 0 containers: []
	W0912 15:22:31.452360    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:22:31.452424    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:22:31.463068    4867 logs.go:276] 2 containers: [83b1123d4f7f c39b454144bf]
	I0912 15:22:31.463086    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:22:31.463092    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:22:31.499659    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:22:31.499758    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:22:31.501055    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:22:31.501061    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:22:31.505948    4867 logs.go:123] Gathering logs for kube-apiserver [bdc9dc70be85] ...
	I0912 15:22:31.505956    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc9dc70be85"
	I0912 15:22:31.546958    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:22:31.546969    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:22:31.559077    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:22:31.559089    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:22:31.596977    4867 logs.go:123] Gathering logs for kube-apiserver [eb0dc5acb005] ...
	I0912 15:22:31.596987    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0dc5acb005"
	I0912 15:22:31.611202    4867 logs.go:123] Gathering logs for kube-proxy [f11c4c968a9a] ...
	I0912 15:22:31.611214    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11c4c968a9a"
	I0912 15:22:31.625867    4867 logs.go:123] Gathering logs for kube-controller-manager [f30094d831c4] ...
	I0912 15:22:31.625878    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30094d831c4"
	I0912 15:22:31.642959    4867 logs.go:123] Gathering logs for kube-controller-manager [560d61b775be] ...
	I0912 15:22:31.642971    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560d61b775be"
	I0912 15:22:31.656953    4867 logs.go:123] Gathering logs for storage-provisioner [c39b454144bf] ...
	I0912 15:22:31.656963    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39b454144bf"
	I0912 15:22:31.677450    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:22:31.677461    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:22:31.700430    4867 logs.go:123] Gathering logs for etcd [d3229f85be9b] ...
	I0912 15:22:31.700438    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3229f85be9b"
	I0912 15:22:31.717762    4867 logs.go:123] Gathering logs for coredns [7cc43947deca] ...
	I0912 15:22:31.717777    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc43947deca"
	I0912 15:22:31.729452    4867 logs.go:123] Gathering logs for storage-provisioner [83b1123d4f7f] ...
	I0912 15:22:31.729464    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b1123d4f7f"
	I0912 15:22:31.740477    4867 logs.go:123] Gathering logs for etcd [122be89153d2] ...
	I0912 15:22:31.740485    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 122be89153d2"
	I0912 15:22:31.758289    4867 logs.go:123] Gathering logs for kube-scheduler [a3bda796bcce] ...
	I0912 15:22:31.758301    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bda796bcce"
	I0912 15:22:31.770081    4867 logs.go:123] Gathering logs for kube-scheduler [ae93257a08cb] ...
	I0912 15:22:31.770091    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae93257a08cb"
	I0912 15:22:31.785290    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:22:31.785299    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:22:31.785324    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:22:31.785328    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:22:31.785331    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:22:31.785335    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:22:31.785338    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:22:32.198656    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:22:37.200896    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:22:37.201159    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:22:37.227827    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:22:37.227944    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:22:37.244190    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:22:37.244272    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:22:37.257360    4705 logs.go:276] 4 containers: [a24810cf4146 4a385dff204c 457bfc5c142a f97fcc59df96]
	I0912 15:22:37.257442    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:22:37.269103    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:22:37.269171    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:22:37.279890    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:22:37.279960    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:22:37.290504    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:22:37.290568    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:22:37.301094    4705 logs.go:276] 0 containers: []
	W0912 15:22:37.301110    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:22:37.301168    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:22:37.311292    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:22:37.311312    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:22:37.311317    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:22:37.347043    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:22:37.347053    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:22:37.363088    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:22:37.363105    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:22:37.374845    4705 logs.go:123] Gathering logs for coredns [4a385dff204c] ...
	I0912 15:22:37.374860    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a385dff204c"
	I0912 15:22:37.386130    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:22:37.386139    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:22:37.397932    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:22:37.397942    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:22:37.412642    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:22:37.412659    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:22:37.424481    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:22:37.424492    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:22:37.449455    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:22:37.449465    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:22:37.453718    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:22:37.453725    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:22:37.472373    4705 logs.go:123] Gathering logs for coredns [a24810cf4146] ...
	I0912 15:22:37.472384    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24810cf4146"
	I0912 15:22:37.483974    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:22:37.483986    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:22:37.495715    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:22:37.495725    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:22:37.528528    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:22:37.528537    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:22:37.540751    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:22:37.540761    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
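Note the coredns inventory growing from 2 to 4 IDs at 15:22:37 (a24810cf4146 and 4a385dff204c join the list): docker ps -a also reports exited containers, so restarted CoreDNS pods leave their old IDs in the list and every later cycle gathers logs from all four. Separating live from dead containers only takes docker's State column; a sketch under the same assumptions (listWithState is an invented name):

	// listWithState splits a component's containers into running and exited,
	// the distinction hidden inside the flat ID lists above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func listWithState(component string) (running, exited []string, err error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}} {{.State}}").Output()
		if err != nil {
			return nil, nil, err
		}
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			parts := strings.Fields(line)
			if len(parts) != 2 {
				continue
			}
			if parts[1] == "running" {
				running = append(running, parts[0])
			} else {
				exited = append(exited, parts[0])
			}
		}
		return running, exited, nil
	}

	func main() {
		running, exited, err := listWithState("coredns")
		fmt.Println("running:", running, "exited:", exited, "err:", err)
	}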
	I0912 15:22:40.060382    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:22:41.789358    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:22:45.062553    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:22:45.062759    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:22:45.081272    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:22:45.081369    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:22:45.095633    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:22:45.095705    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:22:45.107578    4705 logs.go:276] 4 containers: [a24810cf4146 4a385dff204c 457bfc5c142a f97fcc59df96]
	I0912 15:22:45.107654    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:22:45.122090    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:22:45.122159    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:22:45.133022    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:22:45.133093    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:22:45.143488    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:22:45.143553    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:22:45.153562    4705 logs.go:276] 0 containers: []
	W0912 15:22:45.153576    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:22:45.153636    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:22:45.164064    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:22:45.164081    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:22:45.164087    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:22:45.199478    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:22:45.199490    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:22:45.237029    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:22:45.237040    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:22:45.254714    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:22:45.254728    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:22:45.259976    4705 logs.go:123] Gathering logs for coredns [a24810cf4146] ...
	I0912 15:22:45.259984    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24810cf4146"
	I0912 15:22:45.271031    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:22:45.271041    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:22:45.289359    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:22:45.289371    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:22:45.301043    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:22:45.301054    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:22:45.325375    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:22:45.325385    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:22:45.337003    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:22:45.337014    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:22:45.348629    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:22:45.348640    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:22:45.361215    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:22:45.361225    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:22:45.380574    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:22:45.380584    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:22:45.394558    4705 logs.go:123] Gathering logs for coredns [4a385dff204c] ...
	I0912 15:22:45.394567    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a385dff204c"
	I0912 15:22:45.416009    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:22:45.416018    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:22:46.791597    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:22:46.791752    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:22:46.804120    4867 logs.go:276] 2 containers: [eb0dc5acb005 bdc9dc70be85]
	I0912 15:22:46.804197    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:22:46.815069    4867 logs.go:276] 2 containers: [122be89153d2 d3229f85be9b]
	I0912 15:22:46.815133    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:22:46.831113    4867 logs.go:276] 1 containers: [7cc43947deca]
	I0912 15:22:46.831181    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:22:46.841471    4867 logs.go:276] 2 containers: [a3bda796bcce ae93257a08cb]
	I0912 15:22:46.841549    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:22:46.851727    4867 logs.go:276] 1 containers: [f11c4c968a9a]
	I0912 15:22:46.851795    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:22:46.862028    4867 logs.go:276] 2 containers: [f30094d831c4 560d61b775be]
	I0912 15:22:46.862108    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:22:46.872983    4867 logs.go:276] 0 containers: []
	W0912 15:22:46.872999    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:22:46.873062    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:22:46.883832    4867 logs.go:276] 2 containers: [83b1123d4f7f c39b454144bf]
	I0912 15:22:46.883854    4867 logs.go:123] Gathering logs for kube-apiserver [eb0dc5acb005] ...
	I0912 15:22:46.883859    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0dc5acb005"
	I0912 15:22:46.897592    4867 logs.go:123] Gathering logs for kube-apiserver [bdc9dc70be85] ...
	I0912 15:22:46.897601    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc9dc70be85"
	I0912 15:22:46.935791    4867 logs.go:123] Gathering logs for coredns [7cc43947deca] ...
	I0912 15:22:46.935804    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc43947deca"
	I0912 15:22:46.946869    4867 logs.go:123] Gathering logs for storage-provisioner [c39b454144bf] ...
	I0912 15:22:46.946881    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39b454144bf"
	I0912 15:22:46.958175    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:22:46.958188    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:22:46.981325    4867 logs.go:123] Gathering logs for etcd [122be89153d2] ...
	I0912 15:22:46.981335    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 122be89153d2"
	I0912 15:22:46.995634    4867 logs.go:123] Gathering logs for kube-controller-manager [560d61b775be] ...
	I0912 15:22:46.995650    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560d61b775be"
	I0912 15:22:47.010819    4867 logs.go:123] Gathering logs for storage-provisioner [83b1123d4f7f] ...
	I0912 15:22:47.010835    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b1123d4f7f"
	I0912 15:22:47.022642    4867 logs.go:123] Gathering logs for kube-scheduler [ae93257a08cb] ...
	I0912 15:22:47.022654    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae93257a08cb"
	I0912 15:22:47.038394    4867 logs.go:123] Gathering logs for kube-proxy [f11c4c968a9a] ...
	I0912 15:22:47.038407    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11c4c968a9a"
	I0912 15:22:47.050302    4867 logs.go:123] Gathering logs for kube-controller-manager [f30094d831c4] ...
	I0912 15:22:47.050314    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30094d831c4"
	I0912 15:22:47.070138    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:22:47.070150    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:22:47.105237    4867 logs.go:123] Gathering logs for etcd [d3229f85be9b] ...
	I0912 15:22:47.105252    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3229f85be9b"
	I0912 15:22:47.120628    4867 logs.go:123] Gathering logs for kube-scheduler [a3bda796bcce] ...
	I0912 15:22:47.120640    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bda796bcce"
	I0912 15:22:47.140072    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:22:47.140087    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:22:47.178999    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:22:47.179095    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:22:47.180485    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:22:47.180495    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:22:47.185293    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:22:47.185303    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:22:47.196957    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:22:47.196967    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:22:47.196992    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:22:47.196997    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:22:47.197000    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:22:47.197003    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:22:47.197006    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:22:47.931914    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:22:52.934229    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:22:52.934414    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:22:52.948678    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:22:52.948758    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:22:52.960363    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:22:52.960434    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:22:52.971234    4705 logs.go:276] 4 containers: [a24810cf4146 4a385dff204c 457bfc5c142a f97fcc59df96]
	I0912 15:22:52.971302    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:22:52.981521    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:22:52.981582    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:22:52.991993    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:22:52.992055    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:22:53.002554    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:22:53.002617    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:22:53.013076    4705 logs.go:276] 0 containers: []
	W0912 15:22:53.013086    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:22:53.013138    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:22:53.023761    4705 logs.go:276] 1 containers: [947d2478e4fe]
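Each gathering pass starts by discovering one container per control-plane component: the Docker shim names Kubernetes containers with a k8s_ prefix, so filtering on k8s_<component> yields the IDs to pull logs from, and an empty match (as for "kindnet") simply means that component is not deployed. By hand:

    # Resolve the kube-apiserver container ID(s) as the discovery step does;
    # swap in etcd, coredns, kube-scheduler, ... for the other components.
    docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}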
	I0912 15:22:53.023779    4705 logs.go:123] Gathering logs for coredns [a24810cf4146] ...
	I0912 15:22:53.023784    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24810cf4146"
	I0912 15:22:53.035872    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:22:53.035883    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:22:53.047327    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:22:53.047337    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:22:53.072350    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:22:53.072358    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:22:53.106229    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:22:53.106238    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
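The "describe nodes" step uses the version-matched kubectl that minikube ships into the guest, pointed at the in-VM kubeconfig rather than the host's. Verbatim:

    # Describe nodes using the cluster's own v1.24.1 kubectl.
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig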
	I0912 15:22:53.142337    4705 logs.go:123] Gathering logs for coredns [4a385dff204c] ...
	I0912 15:22:53.142348    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a385dff204c"
	I0912 15:22:53.153947    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:22:53.153957    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:22:53.158695    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:22:53.158701    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:22:53.172989    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:22:53.173000    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:22:53.187168    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:22:53.187178    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:22:53.199082    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:22:53.199093    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:22:53.213775    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:22:53.213788    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:22:53.225881    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:22:53.225894    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:22:53.238136    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:22:53.238147    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:22:53.249660    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:22:53.249669    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:22:55.769479    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:22:57.200941    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:23:00.771754    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:23:00.771959    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:23:00.786038    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:23:00.786119    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:23:00.798770    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:23:00.798838    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:23:00.809782    4705 logs.go:276] 4 containers: [a24810cf4146 4a385dff204c 457bfc5c142a f97fcc59df96]
	I0912 15:23:00.809852    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:23:00.820298    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:23:00.820360    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:23:00.830987    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:23:00.831056    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:23:00.841553    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:23:00.841616    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:23:00.852580    4705 logs.go:276] 0 containers: []
	W0912 15:23:00.852590    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:23:00.852645    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:23:00.863287    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:23:00.863304    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:23:00.863310    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:23:00.895681    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:23:00.895692    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:23:00.910266    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:23:00.910276    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:23:00.923945    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:23:00.923956    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:23:00.948649    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:23:00.948656    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:23:00.959957    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:23:00.959968    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:23:00.964495    4705 logs.go:123] Gathering logs for coredns [4a385dff204c] ...
	I0912 15:23:00.964502    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a385dff204c"
	I0912 15:23:00.976185    4705 logs.go:123] Gathering logs for coredns [a24810cf4146] ...
	I0912 15:23:00.976194    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24810cf4146"
	I0912 15:23:00.987848    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:23:00.987879    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:23:01.005379    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:23:01.005391    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:23:01.019531    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:23:01.019545    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:23:01.053306    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:23:01.053318    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:23:01.065307    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:23:01.065320    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:23:01.076982    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:23:01.076995    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:23:01.094071    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:23:01.094083    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:23:03.611759    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:23:02.201455    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:23:02.201556    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:23:02.212645    4867 logs.go:276] 2 containers: [eb0dc5acb005 bdc9dc70be85]
	I0912 15:23:02.212709    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:23:02.224146    4867 logs.go:276] 2 containers: [122be89153d2 d3229f85be9b]
	I0912 15:23:02.224221    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:23:02.237390    4867 logs.go:276] 1 containers: [7cc43947deca]
	I0912 15:23:02.237458    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:23:02.248074    4867 logs.go:276] 2 containers: [a3bda796bcce ae93257a08cb]
	I0912 15:23:02.248148    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:23:02.258342    4867 logs.go:276] 1 containers: [f11c4c968a9a]
	I0912 15:23:02.258415    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:23:02.273528    4867 logs.go:276] 2 containers: [f30094d831c4 560d61b775be]
	I0912 15:23:02.273598    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:23:02.283701    4867 logs.go:276] 0 containers: []
	W0912 15:23:02.283713    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:23:02.283772    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:23:02.294749    4867 logs.go:276] 2 containers: [83b1123d4f7f c39b454144bf]
	I0912 15:23:02.294766    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:23:02.294773    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:23:02.335225    4867 logs.go:123] Gathering logs for etcd [d3229f85be9b] ...
	I0912 15:23:02.335238    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3229f85be9b"
	I0912 15:23:02.352625    4867 logs.go:123] Gathering logs for kube-proxy [f11c4c968a9a] ...
	I0912 15:23:02.352636    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11c4c968a9a"
	I0912 15:23:02.364353    4867 logs.go:123] Gathering logs for storage-provisioner [c39b454144bf] ...
	I0912 15:23:02.364366    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39b454144bf"
	I0912 15:23:02.375833    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:23:02.375846    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:23:02.413255    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:23:02.413348    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:23:02.414689    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:23:02.414695    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:23:02.427498    4867 logs.go:123] Gathering logs for kube-apiserver [eb0dc5acb005] ...
	I0912 15:23:02.427510    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0dc5acb005"
	I0912 15:23:02.441780    4867 logs.go:123] Gathering logs for etcd [122be89153d2] ...
	I0912 15:23:02.441794    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 122be89153d2"
	I0912 15:23:02.456229    4867 logs.go:123] Gathering logs for kube-scheduler [a3bda796bcce] ...
	I0912 15:23:02.456242    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bda796bcce"
	I0912 15:23:02.471552    4867 logs.go:123] Gathering logs for storage-provisioner [83b1123d4f7f] ...
	I0912 15:23:02.471564    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b1123d4f7f"
	I0912 15:23:02.485669    4867 logs.go:123] Gathering logs for kube-controller-manager [560d61b775be] ...
	I0912 15:23:02.485684    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560d61b775be"
	I0912 15:23:02.499316    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:23:02.499331    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:23:02.522923    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:23:02.522931    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:23:02.527387    4867 logs.go:123] Gathering logs for kube-apiserver [bdc9dc70be85] ...
	I0912 15:23:02.527396    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc9dc70be85"
	I0912 15:23:02.567555    4867 logs.go:123] Gathering logs for coredns [7cc43947deca] ...
	I0912 15:23:02.567566    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc43947deca"
	I0912 15:23:02.583433    4867 logs.go:123] Gathering logs for kube-scheduler [ae93257a08cb] ...
	I0912 15:23:02.583445    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae93257a08cb"
	I0912 15:23:02.603163    4867 logs.go:123] Gathering logs for kube-controller-manager [f30094d831c4] ...
	I0912 15:23:02.603177    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30094d831c4"
	I0912 15:23:02.620934    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:23:02.620946    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:23:02.620975    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:23:02.620979    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:23:02.620983    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:23:02.620987    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:23:02.620989    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:23:08.614046    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:23:08.614202    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:23:08.631960    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:23:08.632046    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:23:08.645354    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:23:08.645423    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:23:08.657459    4705 logs.go:276] 4 containers: [a24810cf4146 4a385dff204c 457bfc5c142a f97fcc59df96]
	I0912 15:23:08.657526    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:23:08.667896    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:23:08.667960    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:23:08.678249    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:23:08.678312    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:23:08.688885    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:23:08.688949    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:23:08.702725    4705 logs.go:276] 0 containers: []
	W0912 15:23:08.702737    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:23:08.702792    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:23:08.713370    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:23:08.713387    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:23:08.713393    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:23:08.727302    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:23:08.727316    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:23:08.739091    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:23:08.739104    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:23:08.757006    4705 logs.go:123] Gathering logs for coredns [a24810cf4146] ...
	I0912 15:23:08.757017    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24810cf4146"
	I0912 15:23:08.769049    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:23:08.769063    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:23:08.780700    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:23:08.780711    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:23:08.786602    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:23:08.786613    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:23:08.821706    4705 logs.go:123] Gathering logs for coredns [4a385dff204c] ...
	I0912 15:23:08.821720    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a385dff204c"
	I0912 15:23:08.833395    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:23:08.833406    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:23:08.845493    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:23:08.845508    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:23:08.870633    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:23:08.870643    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:23:08.903881    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:23:08.903888    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:23:08.916178    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:23:08.916187    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:23:08.932335    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:23:08.932348    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:23:08.949464    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:23:08.949474    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:23:11.463277    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:23:12.624137    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:23:16.465508    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:23:16.465683    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:23:16.486364    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:23:16.486433    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:23:16.507352    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:23:16.507436    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:23:16.522254    4705 logs.go:276] 4 containers: [a24810cf4146 4a385dff204c 457bfc5c142a f97fcc59df96]
	I0912 15:23:16.522323    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:23:16.539601    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:23:16.539673    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:23:16.550380    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:23:16.550444    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:23:16.560868    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:23:16.560934    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:23:16.570973    4705 logs.go:276] 0 containers: []
	W0912 15:23:16.570984    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:23:16.571042    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:23:16.581846    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:23:16.581864    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:23:16.581869    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:23:16.604091    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:23:16.604104    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:23:16.615816    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:23:16.615829    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:23:16.620239    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:23:16.620250    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:23:16.634071    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:23:16.634084    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:23:16.645370    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:23:16.645383    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:23:16.670619    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:23:16.670630    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:23:16.705551    4705 logs.go:123] Gathering logs for coredns [a24810cf4146] ...
	I0912 15:23:16.705561    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24810cf4146"
	I0912 15:23:16.718011    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:23:16.718021    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:23:16.729301    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:23:16.729313    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:23:16.741137    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:23:16.741147    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:23:16.775802    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:23:16.775814    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:23:16.792016    4705 logs.go:123] Gathering logs for coredns [4a385dff204c] ...
	I0912 15:23:16.792028    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a385dff204c"
	I0912 15:23:16.804622    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:23:16.804638    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:23:16.816412    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:23:16.816421    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:23:19.334425    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:23:17.626445    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:23:17.626766    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:23:17.661809    4867 logs.go:276] 2 containers: [eb0dc5acb005 bdc9dc70be85]
	I0912 15:23:17.661941    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:23:17.690228    4867 logs.go:276] 2 containers: [122be89153d2 d3229f85be9b]
	I0912 15:23:17.690318    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:23:17.703803    4867 logs.go:276] 1 containers: [7cc43947deca]
	I0912 15:23:17.703879    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:23:17.715111    4867 logs.go:276] 2 containers: [a3bda796bcce ae93257a08cb]
	I0912 15:23:17.715184    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:23:17.730193    4867 logs.go:276] 1 containers: [f11c4c968a9a]
	I0912 15:23:17.730260    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:23:17.740910    4867 logs.go:276] 2 containers: [f30094d831c4 560d61b775be]
	I0912 15:23:17.740982    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:23:17.752103    4867 logs.go:276] 0 containers: []
	W0912 15:23:17.752116    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:23:17.752176    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:23:17.763666    4867 logs.go:276] 2 containers: [83b1123d4f7f c39b454144bf]
	I0912 15:23:17.763686    4867 logs.go:123] Gathering logs for kube-apiserver [eb0dc5acb005] ...
	I0912 15:23:17.763691    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0dc5acb005"
	I0912 15:23:17.778532    4867 logs.go:123] Gathering logs for coredns [7cc43947deca] ...
	I0912 15:23:17.778543    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc43947deca"
	I0912 15:23:17.789985    4867 logs.go:123] Gathering logs for kube-proxy [f11c4c968a9a] ...
	I0912 15:23:17.789996    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11c4c968a9a"
	I0912 15:23:17.802196    4867 logs.go:123] Gathering logs for kube-controller-manager [560d61b775be] ...
	I0912 15:23:17.802207    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560d61b775be"
	I0912 15:23:17.817806    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:23:17.817814    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:23:17.839967    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:23:17.839976    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:23:17.874831    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:23:17.874923    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:23:17.876272    4867 logs.go:123] Gathering logs for kube-apiserver [bdc9dc70be85] ...
	I0912 15:23:17.876279    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc9dc70be85"
	I0912 15:23:17.913850    4867 logs.go:123] Gathering logs for kube-scheduler [a3bda796bcce] ...
	I0912 15:23:17.913860    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bda796bcce"
	I0912 15:23:17.926335    4867 logs.go:123] Gathering logs for kube-controller-manager [f30094d831c4] ...
	I0912 15:23:17.926345    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30094d831c4"
	I0912 15:23:17.943728    4867 logs.go:123] Gathering logs for storage-provisioner [c39b454144bf] ...
	I0912 15:23:17.943738    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39b454144bf"
	I0912 15:23:17.954536    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:23:17.954547    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:23:17.967827    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:23:17.967838    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:23:17.972445    4867 logs.go:123] Gathering logs for etcd [122be89153d2] ...
	I0912 15:23:17.972452    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 122be89153d2"
	I0912 15:23:17.986405    4867 logs.go:123] Gathering logs for kube-scheduler [ae93257a08cb] ...
	I0912 15:23:17.986414    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae93257a08cb"
	I0912 15:23:18.008526    4867 logs.go:123] Gathering logs for storage-provisioner [83b1123d4f7f] ...
	I0912 15:23:18.008537    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b1123d4f7f"
	I0912 15:23:18.020308    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:23:18.020319    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:23:18.054563    4867 logs.go:123] Gathering logs for etcd [d3229f85be9b] ...
	I0912 15:23:18.054573    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3229f85be9b"
	I0912 15:23:18.070114    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:23:18.070123    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:23:18.070152    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:23:18.070156    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:23:18.070159    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:23:18.070163    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:23:18.070166    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:23:24.335068    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:23:24.335458    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:23:24.375819    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:23:24.375931    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:23:24.392080    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:23:24.392158    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:23:24.405995    4705 logs.go:276] 4 containers: [a24810cf4146 4a385dff204c 457bfc5c142a f97fcc59df96]
	I0912 15:23:24.406070    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:23:24.417680    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:23:24.417745    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:23:24.433601    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:23:24.433669    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:23:24.444277    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:23:24.444338    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:23:24.455023    4705 logs.go:276] 0 containers: []
	W0912 15:23:24.455037    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:23:24.455092    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:23:24.465314    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:23:24.465333    4705 logs.go:123] Gathering logs for coredns [4a385dff204c] ...
	I0912 15:23:24.465338    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a385dff204c"
	I0912 15:23:24.477991    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:23:24.478001    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:23:24.489965    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:23:24.489981    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:23:24.525913    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:23:24.525929    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:23:24.530378    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:23:24.530386    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:23:24.550675    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:23:24.550685    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:23:24.565368    4705 logs.go:123] Gathering logs for coredns [a24810cf4146] ...
	I0912 15:23:24.565378    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24810cf4146"
	I0912 15:23:24.577628    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:23:24.577639    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:23:24.589374    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:23:24.589386    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:23:24.625127    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:23:24.625140    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:23:24.644427    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:23:24.644440    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:23:24.656488    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:23:24.656500    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:23:24.681640    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:23:24.681648    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:23:24.693838    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:23:24.693848    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:23:24.711649    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:23:24.711665    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:23:27.224687    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:23:28.074164    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:23:33.076587    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:23:33.076714    4867 kubeadm.go:597] duration metric: took 4m6.942738333s to restartPrimaryControlPlane
	W0912 15:23:33.076783    4867 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
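At this point process 4867 gives up on the in-place restart: the probe loop above ran for its full budget ("took 4m6.942738333s to restartPrimaryControlPlane") without seeing a healthy apiserver, so minikube falls back to wiping the control plane and re-initializing. The control flow amounts to a deadline-bounded retry; a minimal sketch of that shape, assuming a 240-second budget and a 5-second probe interval (not minikube's actual code):

    # Retry the health probe until a deadline, then take the reset path.
    deadline=$((SECONDS + 240))
    until curl -ksS -m 5 https://10.0.2.15:8443/healthz >/dev/null 2>&1; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "control plane did not recover; resetting" >&2
        sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
          kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force
        break
      fi
      sleep 5
    done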
	I0912 15:23:33.076814    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0912 15:23:34.079651    4867 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.002847958s)
	I0912 15:23:34.080009    4867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 15:23:34.085079    4867 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 15:23:34.088185    4867 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 15:23:34.090744    4867 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 15:23:34.090750    4867 kubeadm.go:157] found existing configuration files:
	
	I0912 15:23:34.090771    4867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/admin.conf
	I0912 15:23:34.093207    4867 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 15:23:34.093230    4867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 15:23:34.096490    4867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/kubelet.conf
	I0912 15:23:34.099280    4867 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 15:23:34.099306    4867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 15:23:34.101862    4867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/controller-manager.conf
	I0912 15:23:34.104891    4867 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 15:23:34.104909    4867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 15:23:34.107819    4867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/scheduler.conf
	I0912 15:23:34.110253    4867 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 15:23:34.110272    4867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
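The four grep/rm pairs above are stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint (port 50517 is this profile's forwarded API port). After the reset none of the files exist, so every grep exits with status 2 and each rm is a no-op. Compactly:

    # Remove any kubeconfig that does not point at the expected endpoint.
    endpoint="https://control-plane.minikube.internal:50517"
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done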
	I0912 15:23:34.113188    4867 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 15:23:34.130789    4867 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0912 15:23:34.130834    4867 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 15:23:34.178015    4867 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 15:23:34.178069    4867 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 15:23:34.178132    4867 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 15:23:34.232876    4867 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 15:23:34.237086    4867 out.go:235]   - Generating certificates and keys ...
	I0912 15:23:34.237125    4867 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 15:23:34.237164    4867 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 15:23:34.237206    4867 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0912 15:23:34.237241    4867 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0912 15:23:34.237272    4867 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0912 15:23:34.237299    4867 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0912 15:23:34.237334    4867 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0912 15:23:34.237365    4867 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0912 15:23:34.237399    4867 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0912 15:23:34.237438    4867 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0912 15:23:34.237464    4867 kubeadm.go:310] [certs] Using the existing "sa" key
	I0912 15:23:34.237499    4867 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 15:23:34.526335    4867 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 15:23:34.566242    4867 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 15:23:34.633489    4867 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 15:23:34.727055    4867 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 15:23:34.759283    4867 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 15:23:34.759654    4867 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 15:23:34.759678    4867 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 15:23:34.826447    4867 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 15:23:32.226964    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:23:32.227132    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:23:32.239407    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:23:32.239487    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:23:32.249980    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:23:32.250050    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:23:32.261843    4705 logs.go:276] 4 containers: [a24810cf4146 4a385dff204c 457bfc5c142a f97fcc59df96]
	I0912 15:23:32.261918    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:23:32.272897    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:23:32.272960    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:23:32.283547    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:23:32.283617    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:23:32.294068    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:23:32.294137    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:23:32.315689    4705 logs.go:276] 0 containers: []
	W0912 15:23:32.315702    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:23:32.315756    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:23:32.326523    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:23:32.326541    4705 logs.go:123] Gathering logs for coredns [4a385dff204c] ...
	I0912 15:23:32.326547    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a385dff204c"
	I0912 15:23:32.337855    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:23:32.337865    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:23:32.349638    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:23:32.349649    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:23:32.388860    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:23:32.388871    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:23:32.406693    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:23:32.406703    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:23:32.419123    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:23:32.419133    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:23:32.434081    4705 logs.go:123] Gathering logs for coredns [a24810cf4146] ...
	I0912 15:23:32.434095    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24810cf4146"
	I0912 15:23:32.446515    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:23:32.446526    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:23:32.467031    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:23:32.467042    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:23:32.478940    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:23:32.478950    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:23:32.503457    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:23:32.503465    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:23:32.507546    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:23:32.507555    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:23:32.519390    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:23:32.519399    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:23:32.539988    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:23:32.540000    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:23:32.572918    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:23:32.572931    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:23:35.091569    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:23:34.830439    4867 out.go:235]   - Booting up control plane ...
	I0912 15:23:34.830490    4867 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 15:23:34.830528    4867 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 15:23:34.830576    4867 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 15:23:34.830762    4867 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 15:23:34.831603    4867 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
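kubeadm reuses the certificates that survived the reset ("Using existing ... on disk"), which is also why the init command ignores the DirAvailable/FileAvailable preflight errors, and only rewrites the kubeconfigs and static Pod manifests. Once the kubelet is up it materializes the control plane from the manifest directory, which can be watched from inside the guest:

    # Static Pod manifests the kubelet boots the control plane from.
    ls /etc/kubernetes/manifests
    # ...and the resulting containers as they come up.
    sudo docker ps --filter=name=k8s_kube-apiserver --format={{.ID}}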
	I0912 15:23:40.092894    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:23:40.093124    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:23:40.109867    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:23:40.109952    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:23:40.125546    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:23:40.125628    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:23:40.137208    4705 logs.go:276] 4 containers: [a24810cf4146 4a385dff204c 457bfc5c142a f97fcc59df96]
	I0912 15:23:40.137284    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:23:40.147580    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:23:40.147651    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:23:40.158056    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:23:40.158124    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:23:40.170537    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:23:40.170605    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:23:40.180869    4705 logs.go:276] 0 containers: []
	W0912 15:23:40.180880    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:23:40.180935    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:23:40.190787    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:23:40.190803    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:23:40.190808    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:23:39.833324    4867 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.001087 seconds
	I0912 15:23:39.833419    4867 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 15:23:39.836920    4867 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 15:23:40.344863    4867 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0912 15:23:40.345037    4867 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-841000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0912 15:23:40.854156    4867 kubeadm.go:310] [bootstrap-token] Using token: 9batvv.i8tnvzhsrc8b6qr7
	I0912 15:23:40.858295    4867 out.go:235]   - Configuring RBAC rules ...
	I0912 15:23:40.858359    4867 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 15:23:40.866287    4867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 15:23:40.868459    4867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 15:23:40.869462    4867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 15:23:40.870346    4867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 15:23:40.871182    4867 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 15:23:40.874672    4867 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 15:23:41.053029    4867 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0912 15:23:41.268398    4867 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0912 15:23:41.268969    4867 kubeadm.go:310] 
	I0912 15:23:41.269000    4867 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0912 15:23:41.269006    4867 kubeadm.go:310] 
	I0912 15:23:41.269038    4867 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0912 15:23:41.269044    4867 kubeadm.go:310] 
	I0912 15:23:41.269060    4867 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0912 15:23:41.269091    4867 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 15:23:41.269116    4867 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 15:23:41.269134    4867 kubeadm.go:310] 
	I0912 15:23:41.269157    4867 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0912 15:23:41.269178    4867 kubeadm.go:310] 
	I0912 15:23:41.269241    4867 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0912 15:23:41.269247    4867 kubeadm.go:310] 
	I0912 15:23:41.269308    4867 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0912 15:23:41.269347    4867 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 15:23:41.269384    4867 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 15:23:41.269389    4867 kubeadm.go:310] 
	I0912 15:23:41.269429    4867 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0912 15:23:41.269466    4867 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0912 15:23:41.269469    4867 kubeadm.go:310] 
	I0912 15:23:41.269526    4867 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9batvv.i8tnvzhsrc8b6qr7 \
	I0912 15:23:41.269584    4867 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:927739ba46076d32ef09500def7ebaf4576e192a933c1b27a78721d37c8894ab \
	I0912 15:23:41.269594    4867 kubeadm.go:310] 	--control-plane 
	I0912 15:23:41.269597    4867 kubeadm.go:310] 
	I0912 15:23:41.269646    4867 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0912 15:23:41.269651    4867 kubeadm.go:310] 
	I0912 15:23:41.269690    4867 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9batvv.i8tnvzhsrc8b6qr7 \
	I0912 15:23:41.269749    4867 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:927739ba46076d32ef09500def7ebaf4576e192a933c1b27a78721d37c8894ab 
	I0912 15:23:41.269923    4867 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
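	
	The `--discovery-token-ca-cert-hash` printed in the join commands above is, per the kubeadm convention, the SHA-256 of the cluster CA's public key (its DER-encoded SubjectPublicKeyInfo). A minimal sketch to recompute it on the control-plane node; this helper is illustrative and not part of minikube or the test suite:
	
	```go
	package main
	
	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)
	
	func main() {
		// Read the cluster CA certificate that kubeadm generated.
		pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA key.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}
	```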
	I0912 15:23:41.269940    4867 cni.go:84] Creating CNI manager for ""
	I0912 15:23:41.269948    4867 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:23:41.273129    4867 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 15:23:41.281117    4867 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 15:23:41.286034    4867 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
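	
	The 496-byte conflist itself is not reproduced in the log. As a rough sketch of what minikube's bridge CNI configuration typically contains (a bridge plugin chained with portmap; field values are illustrative, not the exact payload written here):
	
	```go
	package main
	
	import "fmt"
	
	// Illustrative only: a bridge-plus-portmap CNI chain of the kind minikube
	// writes to /etc/cni/net.d/1-k8s.conflist. Not the exact bytes from the log.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`
	
	func main() { fmt.Println(bridgeConflist) }
	```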
	I0912 15:23:41.290918    4867 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 15:23:41.290990    4867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 15:23:41.290990    4867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-841000 minikube.k8s.io/updated_at=2024_09_12T15_23_41_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8 minikube.k8s.io/name=stopped-upgrade-841000 minikube.k8s.io/primary=true
	I0912 15:23:41.330801    4867 ops.go:34] apiserver oom_adj: -16
	I0912 15:23:41.330798    4867 kubeadm.go:1113] duration metric: took 39.852958ms to wait for elevateKubeSystemPrivileges
	I0912 15:23:41.330902    4867 kubeadm.go:394] duration metric: took 4m15.211469291s to StartCluster
	I0912 15:23:41.330913    4867 settings.go:142] acquiring lock: {Name:mk5a46170b8bd524e48b63472100abbce9e9531f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:23:41.331002    4867 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:23:41.331421    4867 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/kubeconfig: {Name:mk048c749582c7be36b3ac030be68b87cf483b70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:23:41.331621    4867 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:23:41.336178    4867 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0912 15:23:41.336232    4867 out.go:177] * Verifying Kubernetes components...
	I0912 15:23:41.331687    4867 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0912 15:23:41.336744    4867 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-841000"
	I0912 15:23:41.336758    4867 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-841000"
	W0912 15:23:41.336766    4867 addons.go:243] addon storage-provisioner should already be in state true
	I0912 15:23:41.336779    4867 host.go:66] Checking if "stopped-upgrade-841000" exists ...
	I0912 15:23:41.336824    4867 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-841000"
	I0912 15:23:41.336842    4867 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-841000"
	I0912 15:23:41.337909    4867 kapi.go:59] client config for stopped-upgrade-841000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/client.key", CAFile:"/Users/jenkins/minikube-integration/19616-1259/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1063653d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0912 15:23:41.340090    4867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 15:23:41.338102    4867 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-841000"
	W0912 15:23:41.340126    4867 addons.go:243] addon default-storageclass should already be in state true
	I0912 15:23:41.340138    4867 host.go:66] Checking if "stopped-upgrade-841000" exists ...
	I0912 15:23:41.340851    4867 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 15:23:41.340859    4867 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 15:23:41.340865    4867 sshutil.go:53] new ssh client: &{IP:localhost Port:50482 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/stopped-upgrade-841000/id_rsa Username:docker}
	I0912 15:23:41.344039    4867 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 15:23:41.352234    4867 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 15:23:41.352242    4867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 15:23:41.352250    4867 sshutil.go:53] new ssh client: &{IP:localhost Port:50482 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/stopped-upgrade-841000/id_rsa Username:docker}
	I0912 15:23:40.202240    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:23:40.202250    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:23:40.206887    4705 logs.go:123] Gathering logs for coredns [a24810cf4146] ...
	I0912 15:23:40.206895    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24810cf4146"
	I0912 15:23:40.218173    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:23:40.218183    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:23:40.233590    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:23:40.233603    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:23:40.245585    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:23:40.245599    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:23:40.263043    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:23:40.263052    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:23:40.297175    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:23:40.297183    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:23:40.331947    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:23:40.331959    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:23:40.357576    4705 logs.go:123] Gathering logs for coredns [4a385dff204c] ...
	I0912 15:23:40.357585    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a385dff204c"
	I0912 15:23:40.370118    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:23:40.370128    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:23:40.385481    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:23:40.385491    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:23:40.397632    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:23:40.397643    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:23:40.409556    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:23:40.409567    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:23:40.430727    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:23:40.430738    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
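	
	Each gathering round above is the same two-step loop: list matching container IDs with a `docker ps` name filter, then tail the last 400 lines of each match. A standalone sketch of that pattern (a hypothetical helper mirroring the commands in the log, not minikube's actual logs.go):
	
	```go
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// gatherLogs lists k8s_<name> containers and tails the last 400 lines of
	// each, mirroring the "docker ps -a --filter" / "docker logs --tail 400"
	// pairs in the log above.
	func gatherLogs(name string) error {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
		if err != nil {
			return err
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				return err
			}
			fmt.Printf("==> %s [%s] <==\n%s", name, id, logs)
		}
		return nil
	}
	
	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			if err := gatherLogs(c); err != nil {
				fmt.Println("gathering", c, "failed:", err)
			}
		}
	}
	```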
	I0912 15:23:42.947779    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:23:41.406957    4867 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 15:23:41.412132    4867 api_server.go:52] waiting for apiserver process to appear ...
	I0912 15:23:41.412176    4867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 15:23:41.416114    4867 api_server.go:72] duration metric: took 84.484666ms to wait for apiserver process to appear ...
	I0912 15:23:41.416121    4867 api_server.go:88] waiting for apiserver healthz status ...
	I0912 15:23:41.416128    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:23:41.422861    4867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 15:23:41.446035    4867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 15:23:41.798978    4867 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0912 15:23:41.798989    4867 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0912 15:23:47.949961    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:23:47.950150    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:23:47.970211    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:23:47.970290    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:23:47.984062    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:23:47.984140    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:23:47.996166    4705 logs.go:276] 4 containers: [a24810cf4146 4a385dff204c 457bfc5c142a f97fcc59df96]
	I0912 15:23:47.996243    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:23:48.006568    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:23:48.006640    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:23:48.017004    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:23:48.017065    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:23:48.027312    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:23:48.027383    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:23:48.037248    4705 logs.go:276] 0 containers: []
	W0912 15:23:48.037261    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:23:48.037313    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:23:48.047919    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:23:48.047937    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:23:48.047943    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:23:48.059207    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:23:48.059217    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:23:48.071085    4705 logs.go:123] Gathering logs for coredns [4a385dff204c] ...
	I0912 15:23:48.071096    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a385dff204c"
	I0912 15:23:48.083291    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:23:48.083303    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:23:48.099269    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:23:48.099277    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:23:48.110823    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:23:48.110836    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:23:48.134752    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:23:48.134761    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:23:48.169008    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:23:48.169019    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:23:48.189680    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:23:48.189696    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:23:48.208026    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:23:48.208039    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:23:48.219404    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:23:48.219417    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:23:48.223800    4705 logs.go:123] Gathering logs for coredns [a24810cf4146] ...
	I0912 15:23:48.223808    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24810cf4146"
	I0912 15:23:48.235271    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:23:48.235284    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:23:48.246539    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:23:48.246550    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:23:48.278844    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:23:48.278851    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:23:46.418107    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:23:46.418166    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:23:50.795468    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:23:51.418376    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:23:51.418395    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:23:55.797142    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:23:55.797292    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:23:55.812467    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:23:55.812546    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:23:55.825049    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:23:55.825124    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:23:55.838118    4705 logs.go:276] 4 containers: [a24810cf4146 4a385dff204c 457bfc5c142a f97fcc59df96]
	I0912 15:23:55.838208    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:23:55.849666    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:23:55.849737    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:23:55.861373    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:23:55.861440    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:23:55.872225    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:23:55.872290    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:23:55.904053    4705 logs.go:276] 0 containers: []
	W0912 15:23:55.904085    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:23:55.904170    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:23:55.925943    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:23:55.925962    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:23:55.925968    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:23:55.964828    4705 logs.go:123] Gathering logs for coredns [4a385dff204c] ...
	I0912 15:23:55.964842    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a385dff204c"
	I0912 15:23:55.977432    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:23:55.977444    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:23:55.993716    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:23:55.993734    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:23:56.006687    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:23:56.006704    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:23:56.011792    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:23:56.011805    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:23:56.024573    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:23:56.024587    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:23:56.038029    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:23:56.038041    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:23:56.072731    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:23:56.072747    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:23:56.091398    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:23:56.091413    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:23:56.111191    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:23:56.111205    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:23:56.137827    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:23:56.137844    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:23:56.153114    4705 logs.go:123] Gathering logs for coredns [a24810cf4146] ...
	I0912 15:23:56.153126    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24810cf4146"
	I0912 15:23:56.164928    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:23:56.164941    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:23:56.177344    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:23:56.177360    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:23:58.690706    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:23:56.418827    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:23:56.418868    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:24:03.692852    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:24:03.693100    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:24:03.710793    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:24:03.710879    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:24:03.724381    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:24:03.724455    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:24:03.735946    4705 logs.go:276] 4 containers: [a24810cf4146 4a385dff204c 457bfc5c142a f97fcc59df96]
	I0912 15:24:03.736022    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:24:03.746620    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:24:03.746686    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:24:03.757448    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:24:03.757511    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:24:03.768111    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:24:03.768178    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:24:03.778436    4705 logs.go:276] 0 containers: []
	W0912 15:24:03.778447    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:24:03.778499    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:24:03.789147    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:24:03.789165    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:24:03.789170    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:24:03.823679    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:24:03.823686    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:24:03.828528    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:24:03.828536    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:24:03.843004    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:24:03.843015    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:24:03.867310    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:24:03.867320    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:24:03.879099    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:24:03.879110    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:24:03.890283    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:24:03.890296    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:24:03.925534    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:24:03.925545    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:24:03.939866    4705 logs.go:123] Gathering logs for coredns [a24810cf4146] ...
	I0912 15:24:03.939876    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24810cf4146"
	I0912 15:24:03.951844    4705 logs.go:123] Gathering logs for coredns [4a385dff204c] ...
	I0912 15:24:03.951855    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a385dff204c"
	I0912 15:24:03.963915    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:24:03.963925    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:24:03.982211    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:24:03.982226    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:24:04.005699    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:24:04.005707    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:24:04.018505    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:24:04.018522    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:24:04.030470    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:24:04.030487    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:24:01.419241    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:24:01.419280    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:24:06.545693    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:24:06.419800    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:24:06.419821    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:24:11.420483    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:24:11.420544    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0912 15:24:11.799668    4867 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0912 15:24:11.803867    4867 out.go:177] * Enabled addons: storage-provisioner
	I0912 15:24:11.547910    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:24:11.548113    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:24:11.570320    4705 logs.go:276] 1 containers: [9944c51580b6]
	I0912 15:24:11.570408    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:24:11.585760    4705 logs.go:276] 1 containers: [8c3cf9322468]
	I0912 15:24:11.585833    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:24:11.598530    4705 logs.go:276] 4 containers: [a24810cf4146 4a385dff204c 457bfc5c142a f97fcc59df96]
	I0912 15:24:11.598609    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:24:11.609409    4705 logs.go:276] 1 containers: [2ba827c6af3d]
	I0912 15:24:11.609474    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:24:11.619540    4705 logs.go:276] 1 containers: [4b9f98641d42]
	I0912 15:24:11.619611    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:24:11.630048    4705 logs.go:276] 1 containers: [3ff429a57794]
	I0912 15:24:11.630117    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:24:11.639819    4705 logs.go:276] 0 containers: []
	W0912 15:24:11.639829    4705 logs.go:278] No container was found matching "kindnet"
	I0912 15:24:11.639884    4705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:24:11.651661    4705 logs.go:276] 1 containers: [947d2478e4fe]
	I0912 15:24:11.651680    4705 logs.go:123] Gathering logs for kubelet ...
	I0912 15:24:11.651686    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 15:24:11.686029    4705 logs.go:123] Gathering logs for dmesg ...
	I0912 15:24:11.686038    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:24:11.690738    4705 logs.go:123] Gathering logs for kube-proxy [4b9f98641d42] ...
	I0912 15:24:11.690747    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9f98641d42"
	I0912 15:24:11.702329    4705 logs.go:123] Gathering logs for kube-controller-manager [3ff429a57794] ...
	I0912 15:24:11.702339    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ff429a57794"
	I0912 15:24:11.719605    4705 logs.go:123] Gathering logs for container status ...
	I0912 15:24:11.719615    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:24:11.731962    4705 logs.go:123] Gathering logs for kube-apiserver [9944c51580b6] ...
	I0912 15:24:11.731974    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9944c51580b6"
	I0912 15:24:11.746961    4705 logs.go:123] Gathering logs for etcd [8c3cf9322468] ...
	I0912 15:24:11.746976    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3cf9322468"
	I0912 15:24:11.760726    4705 logs.go:123] Gathering logs for coredns [457bfc5c142a] ...
	I0912 15:24:11.760736    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 457bfc5c142a"
	I0912 15:24:11.772748    4705 logs.go:123] Gathering logs for Docker ...
	I0912 15:24:11.772757    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:24:11.796496    4705 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:24:11.796505    4705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:24:11.831295    4705 logs.go:123] Gathering logs for coredns [a24810cf4146] ...
	I0912 15:24:11.831308    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a24810cf4146"
	I0912 15:24:11.842948    4705 logs.go:123] Gathering logs for coredns [4a385dff204c] ...
	I0912 15:24:11.842961    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a385dff204c"
	I0912 15:24:11.854801    4705 logs.go:123] Gathering logs for coredns [f97fcc59df96] ...
	I0912 15:24:11.854813    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f97fcc59df96"
	I0912 15:24:11.873680    4705 logs.go:123] Gathering logs for kube-scheduler [2ba827c6af3d] ...
	I0912 15:24:11.873690    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ba827c6af3d"
	I0912 15:24:11.889518    4705 logs.go:123] Gathering logs for storage-provisioner [947d2478e4fe] ...
	I0912 15:24:11.889532    4705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 947d2478e4fe"
	I0912 15:24:14.403373    4705 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:24:11.815842    4867 addons.go:510] duration metric: took 30.484876458s for enable addons: enabled=[storage-provisioner]
	I0912 15:24:19.405668    4705 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:24:19.410447    4705 out.go:201] 
	W0912 15:24:19.413428    4705 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0912 15:24:19.413433    4705 out.go:270] * 
	W0912 15:24:19.413878    4705 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:24:19.425404    4705 out.go:201] 
	I0912 15:24:16.421545    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:24:16.421596    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:24:21.423140    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:24:21.423178    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:24:26.424759    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:24:26.424786    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
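	
	The interleaved healthz checks above come from two upgrade tests running concurrently (PID 4705 for the running-upgrade-871000 profile, PID 4867 for stopped-upgrade-841000). Both poll the apiserver's /healthz endpoint until an overall deadline expires, and each GET carries its own short client timeout, which is what produces the repeated "Client.Timeout exceeded" errors. A minimal sketch of the pattern, assuming a plain net/http client rather than minikube's actual api_server.go:
	
	```go
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// pollHealthz hits the apiserver /healthz endpoint until it returns 200 OK
	// or the overall deadline expires. The short per-request timeout mirrors
	// the "Client.Timeout exceeded" errors in the log above.
	func pollHealthz(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver cert is self-signed in this setup; a real
				// client would load the cluster CA instead of skipping
				// verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("apiserver healthz never reported healthy")
	}
	
	func main() {
		// 6m0s matches the node wait configured earlier in the log.
		fmt.Println(pollHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute))
	}
	```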
	
	
	==> Docker <==
	-- Journal begins at Thu 2024-09-12 22:15:18 UTC, ends at Thu 2024-09-12 22:24:35 UTC. --
	Sep 12 22:24:20 running-upgrade-871000 cri-dockerd[3063]: time="2024-09-12T22:24:20Z" level=error msg="ContainerStats resp: {0x400054cec0 linux}"
	Sep 12 22:24:20 running-upgrade-871000 cri-dockerd[3063]: time="2024-09-12T22:24:20Z" level=error msg="ContainerStats resp: {0x4000931100 linux}"
	Sep 12 22:24:20 running-upgrade-871000 cri-dockerd[3063]: time="2024-09-12T22:24:20Z" level=error msg="ContainerStats resp: {0x4000930940 linux}"
	Sep 12 22:24:20 running-upgrade-871000 cri-dockerd[3063]: time="2024-09-12T22:24:20Z" level=error msg="ContainerStats resp: {0x4000a429c0 linux}"
	Sep 12 22:24:20 running-upgrade-871000 cri-dockerd[3063]: time="2024-09-12T22:24:20Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 12 22:24:21 running-upgrade-871000 cri-dockerd[3063]: time="2024-09-12T22:24:21Z" level=error msg="ContainerStats resp: {0x4000772680 linux}"
	Sep 12 22:24:22 running-upgrade-871000 cri-dockerd[3063]: time="2024-09-12T22:24:22Z" level=error msg="ContainerStats resp: {0x4000901100 linux}"
	Sep 12 22:24:22 running-upgrade-871000 cri-dockerd[3063]: time="2024-09-12T22:24:22Z" level=error msg="ContainerStats resp: {0x40009014c0 linux}"
	Sep 12 22:24:22 running-upgrade-871000 cri-dockerd[3063]: time="2024-09-12T22:24:22Z" level=error msg="ContainerStats resp: {0x4000900380 linux}"
	Sep 12 22:24:22 running-upgrade-871000 cri-dockerd[3063]: time="2024-09-12T22:24:22Z" level=error msg="ContainerStats resp: {0x40008fc240 linux}"
	Sep 12 22:24:22 running-upgrade-871000 cri-dockerd[3063]: time="2024-09-12T22:24:22Z" level=error msg="ContainerStats resp: {0x4000901480 linux}"
	Sep 12 22:24:22 running-upgrade-871000 cri-dockerd[3063]: time="2024-09-12T22:24:22Z" level=error msg="ContainerStats resp: {0x40008fc380 linux}"
	Sep 12 22:24:22 running-upgrade-871000 cri-dockerd[3063]: time="2024-09-12T22:24:22Z" level=error msg="ContainerStats resp: {0x40008fc780 linux}"
	Sep 12 22:24:25 running-upgrade-871000 cri-dockerd[3063]: time="2024-09-12T22:24:25Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 12 22:24:30 running-upgrade-871000 cri-dockerd[3063]: time="2024-09-12T22:24:30Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 12 22:24:32 running-upgrade-871000 cri-dockerd[3063]: time="2024-09-12T22:24:32Z" level=error msg="ContainerStats resp: {0x400054d840 linux}"
	Sep 12 22:24:32 running-upgrade-871000 cri-dockerd[3063]: time="2024-09-12T22:24:32Z" level=error msg="ContainerStats resp: {0x400054d980 linux}"
	Sep 12 22:24:33 running-upgrade-871000 cri-dockerd[3063]: time="2024-09-12T22:24:33Z" level=error msg="ContainerStats resp: {0x40009a9780 linux}"
	Sep 12 22:24:34 running-upgrade-871000 cri-dockerd[3063]: time="2024-09-12T22:24:34Z" level=error msg="ContainerStats resp: {0x4000666880 linux}"
	Sep 12 22:24:34 running-upgrade-871000 cri-dockerd[3063]: time="2024-09-12T22:24:34Z" level=error msg="ContainerStats resp: {0x4000666dc0 linux}"
	Sep 12 22:24:34 running-upgrade-871000 cri-dockerd[3063]: time="2024-09-12T22:24:34Z" level=error msg="ContainerStats resp: {0x40008f8c40 linux}"
	Sep 12 22:24:34 running-upgrade-871000 cri-dockerd[3063]: time="2024-09-12T22:24:34Z" level=error msg="ContainerStats resp: {0x40008f8e00 linux}"
	Sep 12 22:24:34 running-upgrade-871000 cri-dockerd[3063]: time="2024-09-12T22:24:34Z" level=error msg="ContainerStats resp: {0x4000667bc0 linux}"
	Sep 12 22:24:34 running-upgrade-871000 cri-dockerd[3063]: time="2024-09-12T22:24:34Z" level=error msg="ContainerStats resp: {0x4000667f80 linux}"
	Sep 12 22:24:34 running-upgrade-871000 cri-dockerd[3063]: time="2024-09-12T22:24:34Z" level=error msg="ContainerStats resp: {0x40008f9d40 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	23addda61792d       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   4596e99eaa254
	90d23148279b0       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   5c177c321fe91
	a24810cf4146f       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   5c177c321fe91
	4a385dff204ca       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   4596e99eaa254
	4b9f98641d424       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   ded9e28865828
	947d2478e4fe8       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   3bf32363546ad
	9944c51580b6b       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   1e3f470c098c6
	2ba827c6af3d3       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   23d5d82e3bb7d
	3ff429a57794f       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   c5b3484d5b0c2
	8c3cf93224687       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   afb82928ad01d
	
	
	==> coredns [23addda61792] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2571328773046219043.3182456272374299280. HINFO: read udp 10.244.0.2:57135->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2571328773046219043.3182456272374299280. HINFO: read udp 10.244.0.2:54063->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2571328773046219043.3182456272374299280. HINFO: read udp 10.244.0.2:34142->10.0.2.3:53: i/o timeout
	
	
	==> coredns [4a385dff204c] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7153288939974344308.8985715823851854019. HINFO: read udp 10.244.0.2:33533->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7153288939974344308.8985715823851854019. HINFO: read udp 10.244.0.2:34703->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7153288939974344308.8985715823851854019. HINFO: read udp 10.244.0.2:45722->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7153288939974344308.8985715823851854019. HINFO: read udp 10.244.0.2:55410->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7153288939974344308.8985715823851854019. HINFO: read udp 10.244.0.2:59309->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7153288939974344308.8985715823851854019. HINFO: read udp 10.244.0.2:43538->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7153288939974344308.8985715823851854019. HINFO: read udp 10.244.0.2:34489->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7153288939974344308.8985715823851854019. HINFO: read udp 10.244.0.2:50044->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7153288939974344308.8985715823851854019. HINFO: read udp 10.244.0.2:40968->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7153288939974344308.8985715823851854019. HINFO: read udp 10.244.0.2:40220->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [90d23148279b] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 333915732557884066.2807889259489942989. HINFO: read udp 10.244.0.3:40232->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 333915732557884066.2807889259489942989. HINFO: read udp 10.244.0.3:46417->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 333915732557884066.2807889259489942989. HINFO: read udp 10.244.0.3:34061->10.0.2.3:53: i/o timeout
	
	
	==> coredns [a24810cf4146] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7999279936093398314.1472440964467956118. HINFO: read udp 10.244.0.3:51606->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7999279936093398314.1472440964467956118. HINFO: read udp 10.244.0.3:44276->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7999279936093398314.1472440964467956118. HINFO: read udp 10.244.0.3:53327->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7999279936093398314.1472440964467956118. HINFO: read udp 10.244.0.3:33488->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7999279936093398314.1472440964467956118. HINFO: read udp 10.244.0.3:55359->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7999279936093398314.1472440964467956118. HINFO: read udp 10.244.0.3:35702->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7999279936093398314.1472440964467956118. HINFO: read udp 10.244.0.3:60339->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7999279936093398314.1472440964467956118. HINFO: read udp 10.244.0.3:52998->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7999279936093398314.1472440964467956118. HINFO: read udp 10.244.0.3:52392->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7999279936093398314.1472440964467956118. HINFO: read udp 10.244.0.3:37548->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
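	
	All four coredns instances fail the same way: their HINFO self-probes to the upstream resolver at 10.0.2.3:53 (the built-in DNS of QEMU's user-mode networking) hit i/o timeouts. A quick way to reproduce the check from inside the guest is a Go resolver pinned to that upstream; this probe is hypothetical and not part of the test suite:
	
	```go
	package main
	
	import (
		"context"
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// Pin lookups to QEMU's slirp DNS at 10.0.2.3, the upstream that the
		// coredns HINFO probes above fail to reach.
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, "udp", "10.0.2.3:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "registry.k8s.io")
		fmt.Println(addrs, err) // an i/o timeout here matches the coredns errors
	}
	```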
	
	
	==> describe nodes <==
	Name:               running-upgrade-871000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-871000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=running-upgrade-871000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_12T15_20_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 22:20:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-871000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 22:24:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 22:20:18 +0000   Thu, 12 Sep 2024 22:20:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 22:20:18 +0000   Thu, 12 Sep 2024 22:20:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 22:20:18 +0000   Thu, 12 Sep 2024 22:20:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 22:20:18 +0000   Thu, 12 Sep 2024 22:20:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-871000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f1afc1382294ca084bdd6f03f5cadf8
	  System UUID:                0f1afc1382294ca084bdd6f03f5cadf8
	  Boot ID:                    15a502ef-3da4-44db-9d5a-b008c35a7b88
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-55tzb                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-5n2r6                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-871000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-running-upgrade-871000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-running-upgrade-871000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-proxy-s7654                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-871000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-871000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-871000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-871000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-871000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s   node-controller  Node running-upgrade-871000 event: Registered Node running-upgrade-871000 in Controller
	
	
	==> dmesg <==
	[  +1.599777] systemd-fstab-generator[883]: Ignoring "noauto" for root device
	[  +0.070260] systemd-fstab-generator[894]: Ignoring "noauto" for root device
	[  +0.064823] systemd-fstab-generator[905]: Ignoring "noauto" for root device
	[  +1.138479] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.064784] systemd-fstab-generator[1055]: Ignoring "noauto" for root device
	[  +0.071547] systemd-fstab-generator[1066]: Ignoring "noauto" for root device
	[  +2.260453] systemd-fstab-generator[1295]: Ignoring "noauto" for root device
	[  +8.650353] systemd-fstab-generator[1843]: Ignoring "noauto" for root device
	[  +3.093789] systemd-fstab-generator[2206]: Ignoring "noauto" for root device
	[  +0.142506] systemd-fstab-generator[2240]: Ignoring "noauto" for root device
	[  +0.094197] systemd-fstab-generator[2251]: Ignoring "noauto" for root device
	[  +0.092186] systemd-fstab-generator[2264]: Ignoring "noauto" for root device
	[ +13.110810] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.203620] systemd-fstab-generator[3019]: Ignoring "noauto" for root device
	[  +0.075097] systemd-fstab-generator[3031]: Ignoring "noauto" for root device
	[  +0.068185] systemd-fstab-generator[3042]: Ignoring "noauto" for root device
	[  +0.066838] systemd-fstab-generator[3056]: Ignoring "noauto" for root device
	[Sep12 22:16] systemd-fstab-generator[3211]: Ignoring "noauto" for root device
	[  +3.227624] systemd-fstab-generator[3745]: Ignoring "noauto" for root device
	[  +2.286119] systemd-fstab-generator[4304]: Ignoring "noauto" for root device
	[ +19.360734] kauditd_printk_skb: 68 callbacks suppressed
	[Sep12 22:20] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.371506] systemd-fstab-generator[12370]: Ignoring "noauto" for root device
	[  +5.631035] systemd-fstab-generator[12980]: Ignoring "noauto" for root device
	[  +0.460963] systemd-fstab-generator[13112]: Ignoring "noauto" for root device
	
	
	==> etcd [8c3cf9322468] <==
	{"level":"info","ts":"2024-09-12T22:20:13.846Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-09-12T22:20:13.846Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-09-12T22:20:13.846Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-12T22:20:13.846Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-12T22:20:13.846Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-12T22:20:13.846Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-12T22:20:13.846Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-12T22:20:13.938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-12T22:20:13.938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-12T22:20:13.938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-09-12T22:20:13.938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-09-12T22:20:13.938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-12T22:20:13.938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-09-12T22:20:13.938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-12T22:20:13.938Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T22:20:13.939Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T22:20:13.939Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T22:20:13.939Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-871000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-12T22:20:13.939Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T22:20:13.939Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-12T22:20:13.940Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-12T22:20:13.948Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-12T22:20:13.954Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-09-12T22:20:13.954Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-12T22:20:13.954Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 22:24:35 up 9 min,  0 users,  load average: 0.18, 0.30, 0.16
	Linux running-upgrade-871000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [9944c51580b6] <==
	I0912 22:20:15.724803       1 controller.go:611] quota admission added evaluator for: namespaces
	I0912 22:20:15.763588       1 cache.go:39] Caches are synced for autoregister controller
	I0912 22:20:15.763678       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0912 22:20:15.763716       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0912 22:20:15.763751       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0912 22:20:15.780395       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0912 22:20:15.794833       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0912 22:20:16.491784       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0912 22:20:16.668733       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0912 22:20:16.673144       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0912 22:20:16.673164       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0912 22:20:16.809794       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0912 22:20:16.820379       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0912 22:20:16.929439       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0912 22:20:16.931545       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0912 22:20:16.931902       1 controller.go:611] quota admission added evaluator for: endpoints
	I0912 22:20:16.933108       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0912 22:20:17.807608       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0912 22:20:18.532872       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0912 22:20:18.537756       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0912 22:20:18.543004       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0912 22:20:18.583232       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0912 22:20:31.650974       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0912 22:20:31.849787       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0912 22:20:32.391386       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [3ff429a57794] <==
	I0912 22:20:30.949761       1 shared_informer.go:262] Caches are synced for daemon sets
	I0912 22:20:30.949769       1 shared_informer.go:262] Caches are synced for TTL
	I0912 22:20:30.949784       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0912 22:20:30.949912       1 shared_informer.go:262] Caches are synced for PVC protection
	I0912 22:20:30.950419       1 shared_informer.go:262] Caches are synced for HPA
	I0912 22:20:30.999714       1 shared_informer.go:262] Caches are synced for stateful set
	I0912 22:20:31.062965       1 shared_informer.go:262] Caches are synced for taint
	I0912 22:20:31.063039       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0912 22:20:31.063061       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-871000. Assuming now as a timestamp.
	I0912 22:20:31.063079       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0912 22:20:31.063181       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0912 22:20:31.063327       1 event.go:294] "Event occurred" object="running-upgrade-871000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-871000 event: Registered Node running-upgrade-871000 in Controller"
	I0912 22:20:31.099522       1 shared_informer.go:262] Caches are synced for PV protection
	I0912 22:20:31.099615       1 shared_informer.go:262] Caches are synced for persistent volume
	I0912 22:20:31.144430       1 shared_informer.go:262] Caches are synced for resource quota
	I0912 22:20:31.149506       1 shared_informer.go:262] Caches are synced for expand
	I0912 22:20:31.149526       1 shared_informer.go:262] Caches are synced for attach detach
	I0912 22:20:31.150587       1 shared_informer.go:262] Caches are synced for resource quota
	I0912 22:20:31.568837       1 shared_informer.go:262] Caches are synced for garbage collector
	I0912 22:20:31.649269       1 shared_informer.go:262] Caches are synced for garbage collector
	I0912 22:20:31.649281       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0912 22:20:31.652331       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0912 22:20:31.852310       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-s7654"
	I0912 22:20:31.955804       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-55tzb"
	I0912 22:20:31.959991       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-5n2r6"
	
	
	==> kube-proxy [4b9f98641d42] <==
	I0912 22:20:32.378885       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0912 22:20:32.378913       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0912 22:20:32.378938       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0912 22:20:32.388920       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0912 22:20:32.388954       1 server_others.go:206] "Using iptables Proxier"
	I0912 22:20:32.388971       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0912 22:20:32.389100       1 server.go:661] "Version info" version="v1.24.1"
	I0912 22:20:32.389160       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 22:20:32.389412       1 config.go:317] "Starting service config controller"
	I0912 22:20:32.389426       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0912 22:20:32.389438       1 config.go:226] "Starting endpoint slice config controller"
	I0912 22:20:32.389444       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0912 22:20:32.390493       1 config.go:444] "Starting node config controller"
	I0912 22:20:32.390506       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0912 22:20:32.490400       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0912 22:20:32.490431       1 shared_informer.go:262] Caches are synced for service config
	I0912 22:20:32.490552       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [2ba827c6af3d] <==
	W0912 22:20:15.714272       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0912 22:20:15.714322       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0912 22:20:15.714353       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0912 22:20:15.718075       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0912 22:20:15.718190       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0912 22:20:15.718212       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0912 22:20:15.718250       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0912 22:20:15.718271       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0912 22:20:15.718295       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0912 22:20:15.718328       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0912 22:20:15.718388       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0912 22:20:15.718419       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0912 22:20:16.527510       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0912 22:20:16.527567       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0912 22:20:16.537737       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0912 22:20:16.537751       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0912 22:20:16.547527       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0912 22:20:16.547553       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0912 22:20:16.549171       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0912 22:20:16.549191       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0912 22:20:16.618459       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0912 22:20:16.618477       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0912 22:20:16.758109       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0912 22:20:16.758126       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0912 22:20:16.911968       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Thu 2024-09-12 22:15:18 UTC, ends at Thu 2024-09-12 22:24:35 UTC. --
	Sep 12 22:20:19 running-upgrade-871000 kubelet[12986]: I0912 22:20:19.791243   12986 reconciler.go:157] "Reconciler: start to sync state"
	Sep 12 22:20:20 running-upgrade-871000 kubelet[12986]: E0912 22:20:20.162508   12986 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-871000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-871000"
	Sep 12 22:20:20 running-upgrade-871000 kubelet[12986]: E0912 22:20:20.362395   12986 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-871000\" already exists" pod="kube-system/etcd-running-upgrade-871000"
	Sep 12 22:20:20 running-upgrade-871000 kubelet[12986]: E0912 22:20:20.563105   12986 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-871000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-871000"
	Sep 12 22:20:20 running-upgrade-871000 kubelet[12986]: I0912 22:20:20.757637   12986 request.go:601] Waited for 1.102474536s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Sep 12 22:20:20 running-upgrade-871000 kubelet[12986]: E0912 22:20:20.762023   12986 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-871000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-871000"
	Sep 12 22:20:30 running-upgrade-871000 kubelet[12986]: I0912 22:20:30.870649   12986 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 12 22:20:30 running-upgrade-871000 kubelet[12986]: I0912 22:20:30.871045   12986 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 12 22:20:31 running-upgrade-871000 kubelet[12986]: I0912 22:20:31.067849   12986 topology_manager.go:200] "Topology Admit Handler"
	Sep 12 22:20:31 running-upgrade-871000 kubelet[12986]: I0912 22:20:31.071759   12986 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a64973ca-1a61-4cee-a1e0-e2c197d4ce0e-tmp\") pod \"storage-provisioner\" (UID: \"a64973ca-1a61-4cee-a1e0-e2c197d4ce0e\") " pod="kube-system/storage-provisioner"
	Sep 12 22:20:31 running-upgrade-871000 kubelet[12986]: I0912 22:20:31.071781   12986 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnzjf\" (UniqueName: \"kubernetes.io/projected/a64973ca-1a61-4cee-a1e0-e2c197d4ce0e-kube-api-access-xnzjf\") pod \"storage-provisioner\" (UID: \"a64973ca-1a61-4cee-a1e0-e2c197d4ce0e\") " pod="kube-system/storage-provisioner"
	Sep 12 22:20:31 running-upgrade-871000 kubelet[12986]: I0912 22:20:31.855304   12986 topology_manager.go:200] "Topology Admit Handler"
	Sep 12 22:20:31 running-upgrade-871000 kubelet[12986]: I0912 22:20:31.878901   12986 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/59d09da1-11a2-4c28-a557-cabebd3e1984-kube-proxy\") pod \"kube-proxy-s7654\" (UID: \"59d09da1-11a2-4c28-a557-cabebd3e1984\") " pod="kube-system/kube-proxy-s7654"
	Sep 12 22:20:31 running-upgrade-871000 kubelet[12986]: I0912 22:20:31.878929   12986 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59d09da1-11a2-4c28-a557-cabebd3e1984-lib-modules\") pod \"kube-proxy-s7654\" (UID: \"59d09da1-11a2-4c28-a557-cabebd3e1984\") " pod="kube-system/kube-proxy-s7654"
	Sep 12 22:20:31 running-upgrade-871000 kubelet[12986]: I0912 22:20:31.878940   12986 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59d09da1-11a2-4c28-a557-cabebd3e1984-xtables-lock\") pod \"kube-proxy-s7654\" (UID: \"59d09da1-11a2-4c28-a557-cabebd3e1984\") " pod="kube-system/kube-proxy-s7654"
	Sep 12 22:20:31 running-upgrade-871000 kubelet[12986]: I0912 22:20:31.878950   12986 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phx77\" (UniqueName: \"kubernetes.io/projected/59d09da1-11a2-4c28-a557-cabebd3e1984-kube-api-access-phx77\") pod \"kube-proxy-s7654\" (UID: \"59d09da1-11a2-4c28-a557-cabebd3e1984\") " pod="kube-system/kube-proxy-s7654"
	Sep 12 22:20:31 running-upgrade-871000 kubelet[12986]: I0912 22:20:31.959163   12986 topology_manager.go:200] "Topology Admit Handler"
	Sep 12 22:20:31 running-upgrade-871000 kubelet[12986]: I0912 22:20:31.965065   12986 topology_manager.go:200] "Topology Admit Handler"
	Sep 12 22:20:32 running-upgrade-871000 kubelet[12986]: I0912 22:20:32.079702   12986 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1a283cd-6e45-4191-9997-20204d320795-config-volume\") pod \"coredns-6d4b75cb6d-5n2r6\" (UID: \"c1a283cd-6e45-4191-9997-20204d320795\") " pod="kube-system/coredns-6d4b75cb6d-5n2r6"
	Sep 12 22:20:32 running-upgrade-871000 kubelet[12986]: I0912 22:20:32.079727   12986 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skn46\" (UniqueName: \"kubernetes.io/projected/c1a283cd-6e45-4191-9997-20204d320795-kube-api-access-skn46\") pod \"coredns-6d4b75cb6d-5n2r6\" (UID: \"c1a283cd-6e45-4191-9997-20204d320795\") " pod="kube-system/coredns-6d4b75cb6d-5n2r6"
	Sep 12 22:20:32 running-upgrade-871000 kubelet[12986]: I0912 22:20:32.079740   12986 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8902a062-b290-4417-b5f2-a37d8726152b-config-volume\") pod \"coredns-6d4b75cb6d-55tzb\" (UID: \"8902a062-b290-4417-b5f2-a37d8726152b\") " pod="kube-system/coredns-6d4b75cb6d-55tzb"
	Sep 12 22:20:32 running-upgrade-871000 kubelet[12986]: I0912 22:20:32.079754   12986 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59wnf\" (UniqueName: \"kubernetes.io/projected/8902a062-b290-4417-b5f2-a37d8726152b-kube-api-access-59wnf\") pod \"coredns-6d4b75cb6d-55tzb\" (UID: \"8902a062-b290-4417-b5f2-a37d8726152b\") " pod="kube-system/coredns-6d4b75cb6d-55tzb"
	Sep 12 22:20:32 running-upgrade-871000 kubelet[12986]: I0912 22:20:32.739309   12986 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="5c177c321fe91dec6cc7408a0f0f103e7cb5fd1b7435631e35ea2ab964fc6b52"
	Sep 12 22:24:20 running-upgrade-871000 kubelet[12986]: I0912 22:24:20.879578   12986 scope.go:110] "RemoveContainer" containerID="457bfc5c142aba8a2e15cd52078ed7bff01db5f6826f1fdff04b6201ba1b9af1"
	Sep 12 22:24:20 running-upgrade-871000 kubelet[12986]: I0912 22:24:20.899237   12986 scope.go:110] "RemoveContainer" containerID="f97fcc59df96c53173f66abe7ba5bd4872e23ed572a4f29c3a78a2ea68ac035e"
	
	
	==> storage-provisioner [947d2478e4fe] <==
	I0912 22:20:31.590880       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0912 22:20:31.595499       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0912 22:20:31.595595       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0912 22:20:31.601426       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0912 22:20:31.601390       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e4d567c2-a204-479f-8323-8b8011484c03", APIVersion:"v1", ResourceVersion:"328", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-871000_7b769ca0-9acd-4d34-812e-c8a01ddc3152 became leader
	I0912 22:20:31.601500       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-871000_7b769ca0-9acd-4d34-812e-c8a01ddc3152!
	I0912 22:20:31.702126       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-871000_7b769ca0-9acd-4d34-812e-c8a01ddc3152!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-871000 -n running-upgrade-871000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-871000 -n running-upgrade-871000: exit status 2 (15.612695417s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-871000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-871000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-871000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-871000: (1.129569791s)
--- FAIL: TestRunningBinaryUpgrade (608.78s)
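Note on the status probe above: minikube encodes cluster state in its exit code, so "status" returning exit status 2 with "Stopped" on stdout reports a stopped apiserver rather than a harness failure, which is why helpers_test logs it as "(may be ok)". A minimal Go sketch of that pattern, assuming the same binary path and profile name as this run (illustrative only, not part of the report):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Run the same probe the harness uses and capture its combined output.
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.APIServer}}", "-p", "running-upgrade-871000")
		out, err := cmd.CombinedOutput()
		fmt.Printf("output: %s", out)
		// A non-zero exit here encodes component state (e.g. a stopped
		// component), so it is reported but not treated as fatal.
		if exitErr, ok := err.(*exec.ExitError); ok {
			fmt.Printf("status error: exit status %d (may be ok)\n", exitErr.ExitCode())
		}
	}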

                                                
                                    
TestKubernetesUpgrade (18.56s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-469000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-469000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.8443305s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-469000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-469000" primary control-plane node in "kubernetes-upgrade-469000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-469000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 15:17:43.944771    4781 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:17:43.944918    4781 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:17:43.944921    4781 out.go:358] Setting ErrFile to fd 2...
	I0912 15:17:43.944924    4781 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:17:43.945052    4781 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:17:43.946201    4781 out.go:352] Setting JSON to false
	I0912 15:17:43.963082    4781 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4627,"bootTime":1726174836,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:17:43.963151    4781 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:17:43.969689    4781 out.go:177] * [kubernetes-upgrade-469000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:17:43.976513    4781 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:17:43.976577    4781 notify.go:220] Checking for updates...
	I0912 15:17:43.982490    4781 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:17:43.985502    4781 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:17:43.986768    4781 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:17:43.989511    4781 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:17:43.992530    4781 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:17:43.995849    4781 config.go:182] Loaded profile config "multinode-323000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:17:43.995913    4781 config.go:182] Loaded profile config "running-upgrade-871000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0912 15:17:43.995968    4781 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:17:44.000490    4781 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:17:44.007538    4781 start.go:297] selected driver: qemu2
	I0912 15:17:44.007547    4781 start.go:901] validating driver "qemu2" against <nil>
	I0912 15:17:44.007555    4781 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:17:44.009879    4781 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 15:17:44.012464    4781 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:17:44.015565    4781 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0912 15:17:44.015583    4781 cni.go:84] Creating CNI manager for ""
	I0912 15:17:44.015588    4781 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0912 15:17:44.015613    4781 start.go:340] cluster config:
	{Name:kubernetes-upgrade-469000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-469000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:17:44.019328    4781 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:17:44.026553    4781 out.go:177] * Starting "kubernetes-upgrade-469000" primary control-plane node in "kubernetes-upgrade-469000" cluster
	I0912 15:17:44.030515    4781 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0912 15:17:44.030530    4781 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0912 15:17:44.030541    4781 cache.go:56] Caching tarball of preloaded images
	I0912 15:17:44.030596    4781 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:17:44.030602    4781 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0912 15:17:44.030650    4781 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/kubernetes-upgrade-469000/config.json ...
	I0912 15:17:44.030661    4781 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/kubernetes-upgrade-469000/config.json: {Name:mk72c354113c95890ac5f59c2e0096b1f86756de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:17:44.031011    4781 start.go:360] acquireMachinesLock for kubernetes-upgrade-469000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:17:44.031048    4781 start.go:364] duration metric: took 27.375µs to acquireMachinesLock for "kubernetes-upgrade-469000"
	I0912 15:17:44.031061    4781 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-469000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-469000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:17:44.031092    4781 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:17:44.038600    4781 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 15:17:44.056304    4781 start.go:159] libmachine.API.Create for "kubernetes-upgrade-469000" (driver="qemu2")
	I0912 15:17:44.056333    4781 client.go:168] LocalClient.Create starting
	I0912 15:17:44.056409    4781 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:17:44.056442    4781 main.go:141] libmachine: Decoding PEM data...
	I0912 15:17:44.056452    4781 main.go:141] libmachine: Parsing certificate...
	I0912 15:17:44.056495    4781 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:17:44.056521    4781 main.go:141] libmachine: Decoding PEM data...
	I0912 15:17:44.056529    4781 main.go:141] libmachine: Parsing certificate...
	I0912 15:17:44.056897    4781 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:17:44.220926    4781 main.go:141] libmachine: Creating SSH key...
	I0912 15:17:44.284995    4781 main.go:141] libmachine: Creating Disk image...
	I0912 15:17:44.285005    4781 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:17:44.285282    4781 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubernetes-upgrade-469000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubernetes-upgrade-469000/disk.qcow2
	I0912 15:17:44.294571    4781 main.go:141] libmachine: STDOUT: 
	I0912 15:17:44.294594    4781 main.go:141] libmachine: STDERR: 
	I0912 15:17:44.294656    4781 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubernetes-upgrade-469000/disk.qcow2 +20000M
	I0912 15:17:44.302628    4781 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:17:44.302650    4781 main.go:141] libmachine: STDERR: 
	I0912 15:17:44.302664    4781 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubernetes-upgrade-469000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubernetes-upgrade-469000/disk.qcow2
	I0912 15:17:44.302668    4781 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:17:44.302684    4781 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:17:44.302711    4781 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubernetes-upgrade-469000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubernetes-upgrade-469000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubernetes-upgrade-469000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:e5:bb:d3:8b:cf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubernetes-upgrade-469000/disk.qcow2
	I0912 15:17:44.304316    4781 main.go:141] libmachine: STDOUT: 
	I0912 15:17:44.304338    4781 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:17:44.304357    4781 client.go:171] duration metric: took 248.025125ms to LocalClient.Create
	I0912 15:17:46.304759    4781 start.go:128] duration metric: took 2.273701042s to createHost
	I0912 15:17:46.304835    4781 start.go:83] releasing machines lock for "kubernetes-upgrade-469000", held for 2.273842334s
	W0912 15:17:46.304900    4781 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:17:46.315995    4781 out.go:177] * Deleting "kubernetes-upgrade-469000" in qemu2 ...
	W0912 15:17:46.354864    4781 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:17:46.354907    4781 start.go:729] Will try again in 5 seconds ...
	I0912 15:17:51.356941    4781 start.go:360] acquireMachinesLock for kubernetes-upgrade-469000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:17:51.357573    4781 start.go:364] duration metric: took 538.25µs to acquireMachinesLock for "kubernetes-upgrade-469000"
	I0912 15:17:51.357646    4781 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-469000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-469000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:17:51.357894    4781 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:17:51.365531    4781 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 15:17:51.417834    4781 start.go:159] libmachine.API.Create for "kubernetes-upgrade-469000" (driver="qemu2")
	I0912 15:17:51.417882    4781 client.go:168] LocalClient.Create starting
	I0912 15:17:51.418018    4781 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:17:51.418094    4781 main.go:141] libmachine: Decoding PEM data...
	I0912 15:17:51.418112    4781 main.go:141] libmachine: Parsing certificate...
	I0912 15:17:51.418183    4781 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:17:51.418228    4781 main.go:141] libmachine: Decoding PEM data...
	I0912 15:17:51.418243    4781 main.go:141] libmachine: Parsing certificate...
	I0912 15:17:51.418777    4781 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:17:51.592179    4781 main.go:141] libmachine: Creating SSH key...
	I0912 15:17:51.700212    4781 main.go:141] libmachine: Creating Disk image...
	I0912 15:17:51.700218    4781 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:17:51.700482    4781 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubernetes-upgrade-469000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubernetes-upgrade-469000/disk.qcow2
	I0912 15:17:51.710304    4781 main.go:141] libmachine: STDOUT: 
	I0912 15:17:51.710338    4781 main.go:141] libmachine: STDERR: 
	I0912 15:17:51.710397    4781 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubernetes-upgrade-469000/disk.qcow2 +20000M
	I0912 15:17:51.718979    4781 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:17:51.718994    4781 main.go:141] libmachine: STDERR: 
	I0912 15:17:51.719008    4781 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubernetes-upgrade-469000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubernetes-upgrade-469000/disk.qcow2
	I0912 15:17:51.719013    4781 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:17:51.719024    4781 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:17:51.719056    4781 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubernetes-upgrade-469000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubernetes-upgrade-469000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubernetes-upgrade-469000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:ff:1a:d9:fb:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubernetes-upgrade-469000/disk.qcow2
	I0912 15:17:51.720801    4781 main.go:141] libmachine: STDOUT: 
	I0912 15:17:51.720818    4781 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:17:51.720829    4781 client.go:171] duration metric: took 302.951083ms to LocalClient.Create
	I0912 15:17:53.722881    4781 start.go:128] duration metric: took 2.365026625s to createHost
	I0912 15:17:53.722916    4781 start.go:83] releasing machines lock for "kubernetes-upgrade-469000", held for 2.365388542s
	W0912 15:17:53.723135    4781 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-469000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-469000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:17:53.732418    4781 out.go:201] 
	W0912 15:17:53.739513    4781 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:17:53.739574    4781 out.go:270] * 
	* 
	W0912 15:17:53.740600    4781 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:17:53.751496    4781 out.go:201] 

                                                
                                                
** /stderr **
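Every start attempt in this test fails identically: libmachine launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, meaning no socket_vmnet daemon was accepting connections on the build host. A minimal pre-flight probe, sketched in Go under the assumption that the daemon should be listening on that unix socket:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the socket_vmnet control socket directly; a refused connection
		// reproduces the exact failure mode shown in the log above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Running such a probe before the QEMU launch would distinguish a missing or crashed daemon from a minikube regression.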
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-469000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-469000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-469000: (3.350957417s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-469000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-469000 status --format={{.Host}}: exit status 7 (30.952334ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-469000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-469000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.173716834s)

-- stdout --
	* [kubernetes-upgrade-469000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-469000" primary control-plane node in "kubernetes-upgrade-469000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-469000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-469000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:17:57.171627    4819 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:17:57.171779    4819 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:17:57.171783    4819 out.go:358] Setting ErrFile to fd 2...
	I0912 15:17:57.171785    4819 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:17:57.171907    4819 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:17:57.172964    4819 out.go:352] Setting JSON to false
	I0912 15:17:57.189333    4819 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4641,"bootTime":1726174836,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:17:57.189409    4819 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:17:57.193471    4819 out.go:177] * [kubernetes-upgrade-469000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:17:57.200475    4819 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:17:57.200533    4819 notify.go:220] Checking for updates...
	I0912 15:17:57.207382    4819 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:17:57.210391    4819 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:17:57.213455    4819 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:17:57.216364    4819 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:17:57.219412    4819 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:17:57.223663    4819 config.go:182] Loaded profile config "kubernetes-upgrade-469000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0912 15:17:57.223937    4819 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:17:57.228519    4819 out.go:177] * Using the qemu2 driver based on existing profile
	I0912 15:17:57.235390    4819 start.go:297] selected driver: qemu2
	I0912 15:17:57.235395    4819 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-469000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-469000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:17:57.235441    4819 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:17:57.237751    4819 cni.go:84] Creating CNI manager for ""
	I0912 15:17:57.237767    4819 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:17:57.237790    4819 start.go:340] cluster config:
	{Name:kubernetes-upgrade-469000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-469000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:17:57.241311    4819 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:17:57.248371    4819 out.go:177] * Starting "kubernetes-upgrade-469000" primary control-plane node in "kubernetes-upgrade-469000" cluster
	I0912 15:17:57.252398    4819 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 15:17:57.252426    4819 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0912 15:17:57.252439    4819 cache.go:56] Caching tarball of preloaded images
	I0912 15:17:57.252517    4819 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:17:57.252523    4819 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0912 15:17:57.252581    4819 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/kubernetes-upgrade-469000/config.json ...
	I0912 15:17:57.253069    4819 start.go:360] acquireMachinesLock for kubernetes-upgrade-469000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:17:57.253097    4819 start.go:364] duration metric: took 20.708µs to acquireMachinesLock for "kubernetes-upgrade-469000"
	I0912 15:17:57.253108    4819 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:17:57.253112    4819 fix.go:54] fixHost starting: 
	I0912 15:17:57.253223    4819 fix.go:112] recreateIfNeeded on kubernetes-upgrade-469000: state=Stopped err=<nil>
	W0912 15:17:57.253231    4819 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 15:17:57.260396    4819 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-469000" ...
	I0912 15:17:57.264514    4819 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:17:57.264562    4819 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubernetes-upgrade-469000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubernetes-upgrade-469000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubernetes-upgrade-469000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:ff:1a:d9:fb:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubernetes-upgrade-469000/disk.qcow2
	I0912 15:17:57.266574    4819 main.go:141] libmachine: STDOUT: 
	I0912 15:17:57.266593    4819 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:17:57.266620    4819 fix.go:56] duration metric: took 13.508083ms for fixHost
	I0912 15:17:57.266624    4819 start.go:83] releasing machines lock for "kubernetes-upgrade-469000", held for 13.522625ms
	W0912 15:17:57.266630    4819 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:17:57.266665    4819 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:17:57.266669    4819 start.go:729] Will try again in 5 seconds ...
	I0912 15:18:02.267483    4819 start.go:360] acquireMachinesLock for kubernetes-upgrade-469000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:18:02.267880    4819 start.go:364] duration metric: took 287.166µs to acquireMachinesLock for "kubernetes-upgrade-469000"
	I0912 15:18:02.268008    4819 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:18:02.268023    4819 fix.go:54] fixHost starting: 
	I0912 15:18:02.268529    4819 fix.go:112] recreateIfNeeded on kubernetes-upgrade-469000: state=Stopped err=<nil>
	W0912 15:18:02.268545    4819 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 15:18:02.273061    4819 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-469000" ...
	I0912 15:18:02.279146    4819 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:18:02.279379    4819 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubernetes-upgrade-469000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubernetes-upgrade-469000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubernetes-upgrade-469000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:ff:1a:d9:fb:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubernetes-upgrade-469000/disk.qcow2
	I0912 15:18:02.285021    4819 main.go:141] libmachine: STDOUT: 
	I0912 15:18:02.285064    4819 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:18:02.285105    4819 fix.go:56] duration metric: took 17.084375ms for fixHost
	I0912 15:18:02.285114    4819 start.go:83] releasing machines lock for "kubernetes-upgrade-469000", held for 17.217959ms
	W0912 15:18:02.285251    4819 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-469000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-469000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:18:02.293999    4819 out.go:201] 
	W0912 15:18:02.297010    4819 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:18:02.297025    4819 out.go:270] * 
	* 
	W0912 15:18:02.298091    4819 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:18:02.308990    4819 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-469000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-469000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-469000 version --output=json: exit status 1 (48.466042ms)

** stderr ** 
	error: context "kubernetes-upgrade-469000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-09-12 15:18:02.367487 -0700 PDT m=+3002.835229292
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-469000 -n kubernetes-upgrade-469000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-469000 -n kubernetes-upgrade-469000: exit status 7 (32.331708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-469000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-469000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-469000
--- FAIL: TestKubernetesUpgrade (18.56s)
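Note: every start attempt in this test dies at the same point: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so no VM network can be attached. The following is a minimal Go sketch, not part of the test suite, that reproduces that check in isolation; the socket path is taken from the log above, everything else is illustrative.

	// probe_socket_vmnet.go - hypothetical standalone probe. Dialing a unix
	// socket with nothing listening returns "connection refused", the same
	// error minikube surfaces here as GUEST_PROVISION.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails the same way on the agent, the fix is likely on the host (restarting the socket_vmnet service) rather than in minikube, which is consistent with the suggested "minikube delete" advice above not addressing the root cause.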

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.36s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19616
- KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3733052050/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.36s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.16s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19616
- KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3421110908/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.16s)
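Note: both TestHyperkitDriverSkipUpgrade subtests exit with DRV_UNSUPPORTED_OS because the hyperkit driver exists only for Intel Macs; on this darwin/arm64 agent the expected outcome would be a skip rather than a failure. A minimal sketch of such a guard follows; the package and helper names are hypothetical, not the test suite's actual code.

	package hyperkit_test // hypothetical package

	import (
		"runtime"
		"testing"
	)

	// skipIfHyperkitUnsupported marks the test skipped anywhere other than
	// darwin/amd64, the only platform the hyperkit driver is built for.
	func skipIfHyperkitUnsupported(t *testing.T) {
		t.Helper()
		if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
			t.Skipf("hyperkit driver is unsupported on %s/%s", runtime.GOOS, runtime.GOARCH)
		}
	}

	func TestHyperkitUpgradeGuard(t *testing.T) {
		skipIfHyperkitUnsupported(t)
		// the hyperkit upgrade steps would only run on a supported host
	}

With a guard like this the two subtests above would report SKIP on this arm64 agent instead of exit status 56.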

TestStoppedBinaryUpgrade/Upgrade (581.51s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.829728298 start -p stopped-upgrade-841000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.829728298 start -p stopped-upgrade-841000 --memory=2200 --vm-driver=qemu2 : (39.707135333s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.829728298 -p stopped-upgrade-841000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.829728298 -p stopped-upgrade-841000 stop: (12.126519042s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-841000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0912 15:20:10.003498    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.crt: no such file or directory" logger="UnhandledError"
E0912 15:21:52.711957    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/functional-384000/client.crt: no such file or directory" logger="UnhandledError"
E0912 15:22:06.929681    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-841000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m49.577982917s)

-- stdout --
	* [stopped-upgrade-841000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-841000" primary control-plane node in "stopped-upgrade-841000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-841000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0912 15:18:56.369893    4867 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:18:56.370075    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:18:56.370079    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:18:56.370081    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:18:56.370231    4867 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:18:56.371432    4867 out.go:352] Setting JSON to false
	I0912 15:18:56.390539    4867 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4700,"bootTime":1726174836,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:18:56.390655    4867 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:18:56.393759    4867 out.go:177] * [stopped-upgrade-841000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:18:56.399730    4867 notify.go:220] Checking for updates...
	I0912 15:18:56.399740    4867 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:18:56.403757    4867 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:18:56.406715    4867 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:18:56.413710    4867 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:18:56.416710    4867 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:18:56.419729    4867 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:18:56.423040    4867 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0912 15:18:56.426693    4867 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0912 15:18:56.429687    4867 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:18:56.433739    4867 out.go:177] * Using the qemu2 driver based on existing profile
	I0912 15:18:56.439703    4867 start.go:297] selected driver: qemu2
	I0912 15:18:56.439711    4867 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-841000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50517 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0912 15:18:56.439772    4867 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:18:56.442367    4867 cni.go:84] Creating CNI manager for ""
	I0912 15:18:56.442385    4867 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:18:56.442412    4867 start.go:340] cluster config:
	{Name:stopped-upgrade-841000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50517 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0912 15:18:56.442459    4867 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:18:56.449728    4867 out.go:177] * Starting "stopped-upgrade-841000" primary control-plane node in "stopped-upgrade-841000" cluster
	I0912 15:18:56.453710    4867 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0912 15:18:56.453726    4867 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0912 15:18:56.453738    4867 cache.go:56] Caching tarball of preloaded images
	I0912 15:18:56.453794    4867 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:18:56.453799    4867 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0912 15:18:56.453863    4867 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/config.json ...
	I0912 15:18:56.454336    4867 start.go:360] acquireMachinesLock for stopped-upgrade-841000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:18:56.454366    4867 start.go:364] duration metric: took 25.5µs to acquireMachinesLock for "stopped-upgrade-841000"
	I0912 15:18:56.454376    4867 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:18:56.454379    4867 fix.go:54] fixHost starting: 
	I0912 15:18:56.454485    4867 fix.go:112] recreateIfNeeded on stopped-upgrade-841000: state=Stopped err=<nil>
	W0912 15:18:56.454494    4867 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 15:18:56.462666    4867 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-841000" ...
	I0912 15:18:56.466712    4867 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:18:56.466776    4867 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/stopped-upgrade-841000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/stopped-upgrade-841000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/stopped-upgrade-841000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50482-:22,hostfwd=tcp::50483-:2376,hostname=stopped-upgrade-841000 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/stopped-upgrade-841000/disk.qcow2
	I0912 15:18:56.514440    4867 main.go:141] libmachine: STDOUT: 
	I0912 15:18:56.514473    4867 main.go:141] libmachine: STDERR: 
	I0912 15:18:56.514479    4867 main.go:141] libmachine: Waiting for VM to start (ssh -p 50482 docker@127.0.0.1)...
	I0912 15:19:17.248640    4867 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/config.json ...
	I0912 15:19:17.249175    4867 machine.go:93] provisionDockerMachine start ...
	I0912 15:19:17.249301    4867 main.go:141] libmachine: Using SSH client type: native
	I0912 15:19:17.249623    4867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d9bba0] 0x104d9e400 <nil>  [] 0s} localhost 50482 <nil> <nil>}
	I0912 15:19:17.249635    4867 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 15:19:17.334625    4867 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0912 15:19:17.334653    4867 buildroot.go:166] provisioning hostname "stopped-upgrade-841000"
	I0912 15:19:17.334736    4867 main.go:141] libmachine: Using SSH client type: native
	I0912 15:19:17.334948    4867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d9bba0] 0x104d9e400 <nil>  [] 0s} localhost 50482 <nil> <nil>}
	I0912 15:19:17.334958    4867 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-841000 && echo "stopped-upgrade-841000" | sudo tee /etc/hostname
	I0912 15:19:17.416467    4867 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-841000
	
	I0912 15:19:17.416550    4867 main.go:141] libmachine: Using SSH client type: native
	I0912 15:19:17.416721    4867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d9bba0] 0x104d9e400 <nil>  [] 0s} localhost 50482 <nil> <nil>}
	I0912 15:19:17.416735    4867 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-841000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-841000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-841000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 15:19:17.492257    4867 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 15:19:17.492272    4867 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19616-1259/.minikube CaCertPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19616-1259/.minikube}
	I0912 15:19:17.492283    4867 buildroot.go:174] setting up certificates
	I0912 15:19:17.492290    4867 provision.go:84] configureAuth start
	I0912 15:19:17.492300    4867 provision.go:143] copyHostCerts
	I0912 15:19:17.492398    4867 exec_runner.go:144] found /Users/jenkins/minikube-integration/19616-1259/.minikube/cert.pem, removing ...
	I0912 15:19:17.492411    4867 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19616-1259/.minikube/cert.pem
	I0912 15:19:17.492550    4867 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19616-1259/.minikube/cert.pem (1123 bytes)
	I0912 15:19:17.492804    4867 exec_runner.go:144] found /Users/jenkins/minikube-integration/19616-1259/.minikube/key.pem, removing ...
	I0912 15:19:17.492809    4867 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19616-1259/.minikube/key.pem
	I0912 15:19:17.492885    4867 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19616-1259/.minikube/key.pem (1675 bytes)
	I0912 15:19:17.493041    4867 exec_runner.go:144] found /Users/jenkins/minikube-integration/19616-1259/.minikube/ca.pem, removing ...
	I0912 15:19:17.493050    4867 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19616-1259/.minikube/ca.pem
	I0912 15:19:17.493127    4867 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19616-1259/.minikube/ca.pem (1078 bytes)
	I0912 15:19:17.493261    4867 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-841000 san=[127.0.0.1 localhost minikube stopped-upgrade-841000]
	I0912 15:19:17.615524    4867 provision.go:177] copyRemoteCerts
	I0912 15:19:17.615558    4867 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 15:19:17.615566    4867 sshutil.go:53] new ssh client: &{IP:localhost Port:50482 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/stopped-upgrade-841000/id_rsa Username:docker}
	I0912 15:19:17.651715    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0912 15:19:17.658909    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0912 15:19:17.665998    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 15:19:17.672726    4867 provision.go:87] duration metric: took 180.436208ms to configureAuth
	I0912 15:19:17.672738    4867 buildroot.go:189] setting minikube options for container-runtime
	I0912 15:19:17.672852    4867 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0912 15:19:17.672885    4867 main.go:141] libmachine: Using SSH client type: native
	I0912 15:19:17.672978    4867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d9bba0] 0x104d9e400 <nil>  [] 0s} localhost 50482 <nil> <nil>}
	I0912 15:19:17.672982    4867 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0912 15:19:17.740900    4867 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0912 15:19:17.740910    4867 buildroot.go:70] root file system type: tmpfs
	I0912 15:19:17.740974    4867 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0912 15:19:17.741026    4867 main.go:141] libmachine: Using SSH client type: native
	I0912 15:19:17.741138    4867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d9bba0] 0x104d9e400 <nil>  [] 0s} localhost 50482 <nil> <nil>}
	I0912 15:19:17.741173    4867 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0912 15:19:17.814333    4867 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0912 15:19:17.814385    4867 main.go:141] libmachine: Using SSH client type: native
	I0912 15:19:17.814498    4867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d9bba0] 0x104d9e400 <nil>  [] 0s} localhost 50482 <nil> <nil>}
	I0912 15:19:17.814506    4867 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0912 15:19:18.158657    4867 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0912 15:19:18.158671    4867 machine.go:96] duration metric: took 909.512375ms to provisionDockerMachine
	I0912 15:19:18.158677    4867 start.go:293] postStartSetup for "stopped-upgrade-841000" (driver="qemu2")
	I0912 15:19:18.158684    4867 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 15:19:18.158746    4867 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 15:19:18.158755    4867 sshutil.go:53] new ssh client: &{IP:localhost Port:50482 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/stopped-upgrade-841000/id_rsa Username:docker}
	I0912 15:19:18.195081    4867 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 15:19:18.196364    4867 info.go:137] Remote host: Buildroot 2021.02.12
	I0912 15:19:18.196376    4867 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19616-1259/.minikube/addons for local assets ...
	I0912 15:19:18.196462    4867 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19616-1259/.minikube/files for local assets ...
	I0912 15:19:18.196582    4867 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19616-1259/.minikube/files/etc/ssl/certs/17842.pem -> 17842.pem in /etc/ssl/certs
	I0912 15:19:18.196707    4867 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 15:19:18.199457    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/files/etc/ssl/certs/17842.pem --> /etc/ssl/certs/17842.pem (1708 bytes)
	I0912 15:19:18.207080    4867 start.go:296] duration metric: took 48.394542ms for postStartSetup
	I0912 15:19:18.207093    4867 fix.go:56] duration metric: took 21.753324125s for fixHost
	I0912 15:19:18.207125    4867 main.go:141] libmachine: Using SSH client type: native
	I0912 15:19:18.207226    4867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d9bba0] 0x104d9e400 <nil>  [] 0s} localhost 50482 <nil> <nil>}
	I0912 15:19:18.207231    4867 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 15:19:18.275082    4867 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726179557.991960838
	
	I0912 15:19:18.275091    4867 fix.go:216] guest clock: 1726179557.991960838
	I0912 15:19:18.275095    4867 fix.go:229] Guest: 2024-09-12 15:19:17.991960838 -0700 PDT Remote: 2024-09-12 15:19:18.207095 -0700 PDT m=+21.870066459 (delta=-215.134162ms)
	I0912 15:19:18.275107    4867 fix.go:200] guest clock delta is within tolerance: -215.134162ms
	I0912 15:19:18.275109    4867 start.go:83] releasing machines lock for "stopped-upgrade-841000", held for 21.821351125s
	I0912 15:19:18.275180    4867 ssh_runner.go:195] Run: cat /version.json
	I0912 15:19:18.275184    4867 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 15:19:18.275187    4867 sshutil.go:53] new ssh client: &{IP:localhost Port:50482 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/stopped-upgrade-841000/id_rsa Username:docker}
	I0912 15:19:18.275200    4867 sshutil.go:53] new ssh client: &{IP:localhost Port:50482 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/stopped-upgrade-841000/id_rsa Username:docker}
	W0912 15:19:18.275826    4867 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50482: connect: connection refused
	I0912 15:19:18.275852    4867 retry.go:31] will retry after 312.47196ms: dial tcp [::1]:50482: connect: connection refused
	W0912 15:19:18.647581    4867 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0912 15:19:18.647760    4867 ssh_runner.go:195] Run: systemctl --version
	I0912 15:19:18.653011    4867 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 15:19:18.657195    4867 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 15:19:18.657255    4867 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0912 15:19:18.664030    4867 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0912 15:19:18.672901    4867 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 15:19:18.672918    4867 start.go:495] detecting cgroup driver to use...
	I0912 15:19:18.673033    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 15:19:18.684867    4867 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0912 15:19:18.689179    4867 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0912 15:19:18.692907    4867 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0912 15:19:18.692949    4867 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0912 15:19:18.696601    4867 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 15:19:18.700173    4867 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0912 15:19:18.703794    4867 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 15:19:18.707444    4867 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 15:19:18.710915    4867 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0912 15:19:18.713923    4867 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0912 15:19:18.716711    4867 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0912 15:19:18.719860    4867 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 15:19:18.722763    4867 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 15:19:18.725429    4867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 15:19:18.782192    4867 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0912 15:19:18.788434    4867 start.go:495] detecting cgroup driver to use...
	I0912 15:19:18.788520    4867 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0912 15:19:18.793770    4867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 15:19:18.798662    4867 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 15:19:18.804669    4867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 15:19:18.809417    4867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 15:19:18.813582    4867 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0912 15:19:18.867472    4867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 15:19:18.873096    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 15:19:18.878906    4867 ssh_runner.go:195] Run: which cri-dockerd
	I0912 15:19:18.880138    4867 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0912 15:19:18.883168    4867 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0912 15:19:18.888147    4867 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0912 15:19:18.943435    4867 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0912 15:19:19.028292    4867 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0912 15:19:19.028360    4867 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0912 15:19:19.033413    4867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 15:19:19.115037    4867 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0912 15:19:20.231551    4867 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.116528792s)
	I0912 15:19:20.231610    4867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0912 15:19:20.239632    4867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0912 15:19:20.244663    4867 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0912 15:19:20.302776    4867 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0912 15:19:20.366617    4867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 15:19:20.431731    4867 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0912 15:19:20.438236    4867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0912 15:19:20.442671    4867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 15:19:20.510507    4867 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0912 15:19:20.548414    4867 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0912 15:19:20.548489    4867 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
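
minikube then waits up to 60s for the cri-dockerd socket to appear, polling with stat. A sketch of that wait loop in Go, assuming a plain filesystem poll (function name and retry interval are illustrative):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for a path until it exists or the deadline passes,
	// mirroring the "Will wait 60s for socket path" step in the log.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
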
	I0912 15:19:20.551958    4867 start.go:563] Will wait 60s for crictl version
	I0912 15:19:20.552006    4867 ssh_runner.go:195] Run: which crictl
	I0912 15:19:20.553382    4867 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 15:19:20.568034    4867 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0912 15:19:20.568100    4867 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0912 15:19:20.584420    4867 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0912 15:19:20.605251    4867 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0912 15:19:20.605320    4867 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0912 15:19:20.606632    4867 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 15:19:20.610697    4867 kubeadm.go:883] updating cluster {Name:stopped-upgrade-841000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50517 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0912 15:19:20.610740    4867 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0912 15:19:20.610781    4867 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0912 15:19:20.621353    4867 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0912 15:19:20.621361    4867 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0912 15:19:20.621405    4867 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0912 15:19:20.624285    4867 ssh_runner.go:195] Run: which lz4
	I0912 15:19:20.625636    4867 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 15:19:20.626929    4867 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 15:19:20.626937    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0912 15:19:21.495884    4867 docker.go:649] duration metric: took 870.301625ms to copy over tarball
	I0912 15:19:21.495947    4867 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 15:19:22.662282    4867 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.1663525s)
	I0912 15:19:22.662298    4867 ssh_runner.go:146] rm: /preloaded.tar.lz4
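
The preload step copies the roughly 360 MB lz4 tarball to the guest, unpacks it under /var, and then deletes it. A sketch of the extraction step in Go via os/exec, reusing the exact tar invocation from the log (requires lz4 and sudo on the guest):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// Mirrors the logged command:
		//   sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}
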
	I0912 15:19:22.677506    4867 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0912 15:19:22.680373    4867 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0912 15:19:22.685525    4867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 15:19:22.755544    4867 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0912 15:19:24.418700    4867 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.663184542s)
	I0912 15:19:24.418801    4867 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0912 15:19:24.431784    4867 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0912 15:19:24.431792    4867 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0912 15:19:24.431797    4867 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0912 15:19:24.437382    4867 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 15:19:24.439424    4867 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0912 15:19:24.441091    4867 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0912 15:19:24.441104    4867 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 15:19:24.442609    4867 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0912 15:19:24.442919    4867 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0912 15:19:24.444182    4867 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0912 15:19:24.444233    4867 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0912 15:19:24.445778    4867 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0912 15:19:24.445870    4867 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0912 15:19:24.446884    4867 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0912 15:19:24.447204    4867 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0912 15:19:24.448032    4867 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0912 15:19:24.448167    4867 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0912 15:19:24.449064    4867 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0912 15:19:24.449719    4867 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0912 15:19:24.875340    4867 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0912 15:19:24.888168    4867 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0912 15:19:24.889019    4867 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0912 15:19:24.889052    4867 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0912 15:19:24.889086    4867 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0912 15:19:24.899956    4867 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	W0912 15:19:24.908776    4867 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0912 15:19:24.908913    4867 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0912 15:19:24.908978    4867 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0912 15:19:24.910092    4867 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0912 15:19:24.910112    4867 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0912 15:19:24.910142    4867 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0912 15:19:24.910166    4867 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0912 15:19:24.910300    4867 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0912 15:19:24.915343    4867 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0912 15:19:24.915362    4867 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0912 15:19:24.915408    4867 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0912 15:19:24.931487    4867 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0912 15:19:24.931503    4867 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0912 15:19:24.931551    4867 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0912 15:19:24.932102    4867 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0912 15:19:24.932424    4867 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0912 15:19:24.932435    4867 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0912 15:19:24.932464    4867 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0912 15:19:24.941386    4867 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0912 15:19:24.941416    4867 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0912 15:19:24.941431    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0912 15:19:24.941479    4867 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0912 15:19:24.952748    4867 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0912 15:19:24.954453    4867 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0912 15:19:24.954474    4867 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0912 15:19:24.954518    4867 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0912 15:19:24.958830    4867 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0912 15:19:24.958941    4867 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0912 15:19:24.960861    4867 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0912 15:19:24.960868    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0912 15:19:24.969972    4867 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0912 15:19:24.969994    4867 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0912 15:19:24.970009    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0912 15:19:24.991243    4867 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0912 15:19:25.030109    4867 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0912 15:19:25.030133    4867 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0912 15:19:25.030140    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0912 15:19:25.032719    4867 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0912 15:19:25.032737    4867 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0912 15:19:25.032789    4867 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0912 15:19:25.073946    4867 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0912 15:19:25.073979    4867 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0912 15:19:25.074093    4867 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0912 15:19:25.075465    4867 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0912 15:19:25.075477    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0912 15:19:25.287421    4867 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0912 15:19:25.287435    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0912 15:19:25.325307    4867 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0912 15:19:25.325423    4867 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 15:19:25.438303    4867 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0912 15:19:25.438333    4867 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0912 15:19:25.438360    4867 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 15:19:25.438423    4867 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 15:19:25.456195    4867 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0912 15:19:25.456308    4867 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0912 15:19:25.457868    4867 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0912 15:19:25.457880    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0912 15:19:25.486983    4867 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0912 15:19:25.486995    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0912 15:19:25.726664    4867 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0912 15:19:25.726711    4867 cache_images.go:92] duration metric: took 1.294943292s to LoadCachedImages
	W0912 15:19:25.726756    4867 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
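
For the images that were present in the cache, each transfer ends with the tarball being piped into the daemon, as in the "sudo cat ... | docker load" runs above. A sketch of that final step in Go (path taken from the log; the helper name and error handling are illustrative):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// loadImage mirrors the logged "sudo cat <tarball> | docker load" pipeline.
	func loadImage(path string) error {
		cmd := exec.Command("/bin/bash", "-c", fmt.Sprintf("sudo cat %s | docker load", path))
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		return cmd.Run()
	}

	func main() {
		if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
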
	I0912 15:19:25.726762    4867 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0912 15:19:25.726810    4867 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-841000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 15:19:25.726874    4867 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0912 15:19:25.745832    4867 cni.go:84] Creating CNI manager for ""
	I0912 15:19:25.745843    4867 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:19:25.745850    4867 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 15:19:25.745858    4867 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-841000 NodeName:stopped-upgrade-841000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 15:19:25.745928    4867 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-841000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
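
The YAML above is rendered from the kubeadm options listed earlier. A toy sketch of producing such a stanza with text/template, trimmed to a few fields and populated with values from this log (minikube's real template is considerably larger; this is only an illustration of the idea):

	package main

	import (
		"os"
		"text/template"
	)

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: {{.KubernetesVersion}}
	clusterName: {{.ClusterName}}
	controlPlaneEndpoint: {{.ControlPlaneEndpoint}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		// Values taken from the kubeadm config shown in the log.
		_ = t.Execute(os.Stdout, map[string]string{
			"KubernetesVersion":    "v1.24.1",
			"ClusterName":          "mk",
			"ControlPlaneEndpoint": "control-plane.minikube.internal:8443",
			"PodSubnet":            "10.244.0.0/16",
			"ServiceSubnet":        "10.96.0.0/12",
		})
	}
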
	
	I0912 15:19:25.745997    4867 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0912 15:19:25.749109    4867 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 15:19:25.749137    4867 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 15:19:25.751834    4867 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0912 15:19:25.756868    4867 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 15:19:25.761440    4867 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0912 15:19:25.766601    4867 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0912 15:19:25.767882    4867 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 15:19:25.771662    4867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 15:19:25.850339    4867 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 15:19:25.856382    4867 certs.go:68] Setting up /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000 for IP: 10.0.2.15
	I0912 15:19:25.856390    4867 certs.go:194] generating shared ca certs ...
	I0912 15:19:25.856401    4867 certs.go:226] acquiring lock for ca certs: {Name:mkbb0c3f29ef431420fb2bc7ce1073854ddb346b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:19:25.856592    4867 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/ca.key
	I0912 15:19:25.856645    4867 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/proxy-client-ca.key
	I0912 15:19:25.856651    4867 certs.go:256] generating profile certs ...
	I0912 15:19:25.856730    4867 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/client.key
	I0912 15:19:25.856749    4867 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/apiserver.key.2b74d301
	I0912 15:19:25.856761    4867 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/apiserver.crt.2b74d301 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0912 15:19:25.972407    4867 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/apiserver.crt.2b74d301 ...
	I0912 15:19:25.972423    4867 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/apiserver.crt.2b74d301: {Name:mk752d4681e4ba2454c43b9bc2aa12efe28c4a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:19:25.973118    4867 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/apiserver.key.2b74d301 ...
	I0912 15:19:25.973128    4867 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/apiserver.key.2b74d301: {Name:mk745635a7fb23d1c496549bf805c1c2cc9798a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:19:25.973288    4867 certs.go:381] copying /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/apiserver.crt.2b74d301 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/apiserver.crt
	I0912 15:19:25.973431    4867 certs.go:385] copying /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/apiserver.key.2b74d301 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/apiserver.key
	I0912 15:19:25.973593    4867 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/proxy-client.key
	I0912 15:19:25.973729    4867 certs.go:484] found cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/1784.pem (1338 bytes)
	W0912 15:19:25.973763    4867 certs.go:480] ignoring /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/1784_empty.pem, impossibly tiny 0 bytes
	I0912 15:19:25.973769    4867 certs.go:484] found cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 15:19:25.973793    4867 certs.go:484] found cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem (1078 bytes)
	I0912 15:19:25.973814    4867 certs.go:484] found cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem (1123 bytes)
	I0912 15:19:25.973831    4867 certs.go:484] found cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/key.pem (1675 bytes)
	I0912 15:19:25.973871    4867 certs.go:484] found cert: /Users/jenkins/minikube-integration/19616-1259/.minikube/files/etc/ssl/certs/17842.pem (1708 bytes)
	I0912 15:19:25.974191    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 15:19:25.981196    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 15:19:25.987798    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 15:19:25.995010    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 15:19:26.002600    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0912 15:19:26.009337    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 15:19:26.015927    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 15:19:26.023208    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 15:19:26.030632    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/files/etc/ssl/certs/17842.pem --> /usr/share/ca-certificates/17842.pem (1708 bytes)
	I0912 15:19:26.037445    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 15:19:26.043960    4867 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/1784.pem --> /usr/share/ca-certificates/1784.pem (1338 bytes)
	I0912 15:19:26.051089    4867 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 15:19:26.056383    4867 ssh_runner.go:195] Run: openssl version
	I0912 15:19:26.058309    4867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17842.pem && ln -fs /usr/share/ca-certificates/17842.pem /etc/ssl/certs/17842.pem"
	I0912 15:19:26.061255    4867 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17842.pem
	I0912 15:19:26.062585    4867 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:44 /usr/share/ca-certificates/17842.pem
	I0912 15:19:26.062602    4867 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17842.pem
	I0912 15:19:26.064498    4867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17842.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 15:19:26.067842    4867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 15:19:26.071253    4867 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 15:19:26.072888    4867 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:29 /usr/share/ca-certificates/minikubeCA.pem
	I0912 15:19:26.072909    4867 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 15:19:26.074563    4867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 15:19:26.077389    4867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1784.pem && ln -fs /usr/share/ca-certificates/1784.pem /etc/ssl/certs/1784.pem"
	I0912 15:19:26.080155    4867 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1784.pem
	I0912 15:19:26.081703    4867 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:44 /usr/share/ca-certificates/1784.pem
	I0912 15:19:26.081722    4867 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1784.pem
	I0912 15:19:26.083483    4867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1784.pem /etc/ssl/certs/51391683.0"
	I0912 15:19:26.086894    4867 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 15:19:26.088518    4867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 15:19:26.090417    4867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 15:19:26.092394    4867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 15:19:26.094288    4867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 15:19:26.096146    4867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 15:19:26.098178    4867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
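
Each of these openssl invocations asks whether a certificate expires within 86400 seconds (24 hours). The equivalent check in Go with crypto/x509, mirroring -checkend (path and window taken from the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// checkend reports whether the PEM certificate at path expires within d,
	// like `openssl x509 -noout -checkend 86400`.
	func checkend(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", expiring)
	}
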
	I0912 15:19:26.100010    4867 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-841000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50517 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0912 15:19:26.100074    4867 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0912 15:19:26.110919    4867 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 15:19:26.114354    4867 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0912 15:19:26.114360    4867 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0912 15:19:26.114387    4867 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 15:19:26.117888    4867 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 15:19:26.118171    4867 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-841000" does not appear in /Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:19:26.118271    4867 kubeconfig.go:62] /Users/jenkins/minikube-integration/19616-1259/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-841000" cluster setting kubeconfig missing "stopped-upgrade-841000" context setting]
	I0912 15:19:26.118451    4867 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/kubeconfig: {Name:mk048c749582c7be36b3ac030be68b87cf483b70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:19:26.118910    4867 kapi.go:59] client config for stopped-upgrade-841000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/client.key", CAFile:"/Users/jenkins/minikube-integration/19616-1259/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1063653d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0912 15:19:26.119260    4867 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 15:19:26.122607    4867 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-841000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0912 15:19:26.122612    4867 kubeadm.go:1160] stopping kube-system containers ...
	I0912 15:19:26.122652    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0912 15:19:26.132965    4867 docker.go:483] Stopping containers: [560d61b775be d3229f85be9b ae93257a08cb bdc9dc70be85 ddfbb03a6103 0273e19b82fe 9b6f02f235a6 73bd0a6b6c8b]
	I0912 15:19:26.133033    4867 ssh_runner.go:195] Run: docker stop 560d61b775be d3229f85be9b ae93257a08cb bdc9dc70be85 ddfbb03a6103 0273e19b82fe 9b6f02f235a6 73bd0a6b6c8b
	I0912 15:19:26.143586    4867 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 15:19:26.149336    4867 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 15:19:26.152018    4867 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 15:19:26.152030    4867 kubeadm.go:157] found existing configuration files:
	
	I0912 15:19:26.152051    4867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/admin.conf
	I0912 15:19:26.154920    4867 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 15:19:26.154943    4867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 15:19:26.157697    4867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/kubelet.conf
	I0912 15:19:26.160045    4867 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 15:19:26.160076    4867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 15:19:26.162886    4867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/controller-manager.conf
	I0912 15:19:26.165395    4867 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 15:19:26.165413    4867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 15:19:26.168094    4867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/scheduler.conf
	I0912 15:19:26.171156    4867 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 15:19:26.171179    4867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 15:19:26.173939    4867 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 15:19:26.176513    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 15:19:26.197466    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 15:19:26.757177    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 15:19:26.863945    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 15:19:26.883205    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
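
The restart path replays the kubeadm init phases one by one against the regenerated config. A sketch of that sequence in Go, reusing the exact command line from the log (phase list and paths as logged):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// PATH points at the version-pinned binaries minikube staged on the guest.
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, phase := range phases {
			cmdline := fmt.Sprintf(
				`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
				phase)
			cmd := exec.Command("/bin/bash", "-c", cmdline)
			cmd.Stdout = os.Stdout
			cmd.Stderr = os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", phase, err)
				os.Exit(1)
			}
		}
	}
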
	I0912 15:19:26.913969    4867 api_server.go:52] waiting for apiserver process to appear ...
	I0912 15:19:26.914035    4867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 15:19:27.416226    4867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 15:19:27.916117    4867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 15:19:27.920302    4867 api_server.go:72] duration metric: took 1.006361541s to wait for apiserver process to appear ...
	I0912 15:19:27.920311    4867 api_server.go:88] waiting for apiserver healthz status ...
	I0912 15:19:27.920320    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:19:32.922390    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:19:32.922422    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:19:37.923048    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:19:37.923092    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:19:42.923537    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:19:42.923593    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:19:47.924294    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:19:47.924326    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:19:52.925066    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:19:52.925120    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:19:57.926257    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:19:57.926305    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:02.927854    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:02.927906    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:07.929864    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:07.929906    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:12.932032    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:12.932054    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:17.934118    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:17.934144    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:22.936239    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:22.936311    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:27.938698    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
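
Each healthz probe above times out after about five seconds, and after roughly a minute of failures the start logic falls back to collecting diagnostics. A sketch of such a probe loop in Go; note that the real client trusts the cluster CA, whereas this simplified version skips TLS verification:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// pollHealthz issues repeated GETs against /healthz until the apiserver
	// answers 200 or the attempts run out, mirroring the loop in the log.
	func pollHealthz(url string, attempts int) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Illustrative only; use the cluster CA in real code.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for i := 0; i < attempts; i++ {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := pollHealthz("https://10.0.2.15:8443/healthz", 20); err != nil {
			fmt.Println(err)
		}
	}
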
	I0912 15:20:27.938799    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:20:27.950434    4867 logs.go:276] 2 containers: [eb0dc5acb005 bdc9dc70be85]
	I0912 15:20:27.950506    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:20:27.961302    4867 logs.go:276] 2 containers: [122be89153d2 d3229f85be9b]
	I0912 15:20:27.961368    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:20:27.971903    4867 logs.go:276] 1 containers: [7cc43947deca]
	I0912 15:20:27.971977    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:20:27.985860    4867 logs.go:276] 2 containers: [a3bda796bcce ae93257a08cb]
	I0912 15:20:27.985927    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:20:27.996661    4867 logs.go:276] 1 containers: [f11c4c968a9a]
	I0912 15:20:27.996725    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:20:28.007240    4867 logs.go:276] 2 containers: [f30094d831c4 560d61b775be]
	I0912 15:20:28.007298    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:20:28.018520    4867 logs.go:276] 0 containers: []
	W0912 15:20:28.018531    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:20:28.018588    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:20:28.029483    4867 logs.go:276] 2 containers: [83b1123d4f7f c39b454144bf]
	I0912 15:20:28.029507    4867 logs.go:123] Gathering logs for storage-provisioner [c39b454144bf] ...
	I0912 15:20:28.029514    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39b454144bf"
	I0912 15:20:28.040385    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:20:28.040396    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:20:28.052670    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:20:28.052685    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:20:28.056763    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:20:28.056770    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:20:28.131497    4867 logs.go:123] Gathering logs for kube-scheduler [a3bda796bcce] ...
	I0912 15:20:28.131509    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bda796bcce"
	I0912 15:20:28.143306    4867 logs.go:123] Gathering logs for kube-scheduler [ae93257a08cb] ...
	I0912 15:20:28.143318    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae93257a08cb"
	I0912 15:20:28.159200    4867 logs.go:123] Gathering logs for kube-controller-manager [f30094d831c4] ...
	I0912 15:20:28.159212    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30094d831c4"
	I0912 15:20:28.177651    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:20:28.177661    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:20:28.214201    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:20:28.214301    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:20:28.215657    4867 logs.go:123] Gathering logs for kube-apiserver [bdc9dc70be85] ...
	I0912 15:20:28.215665    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc9dc70be85"
	I0912 15:20:28.257052    4867 logs.go:123] Gathering logs for etcd [122be89153d2] ...
	I0912 15:20:28.257068    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 122be89153d2"
	I0912 15:20:28.272074    4867 logs.go:123] Gathering logs for coredns [7cc43947deca] ...
	I0912 15:20:28.272085    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc43947deca"
	I0912 15:20:28.283014    4867 logs.go:123] Gathering logs for kube-controller-manager [560d61b775be] ...
	I0912 15:20:28.283027    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560d61b775be"
	I0912 15:20:28.297989    4867 logs.go:123] Gathering logs for storage-provisioner [83b1123d4f7f] ...
	I0912 15:20:28.297999    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b1123d4f7f"
	I0912 15:20:28.309556    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:20:28.309568    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:20:28.335448    4867 logs.go:123] Gathering logs for kube-apiserver [eb0dc5acb005] ...
	I0912 15:20:28.335459    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0dc5acb005"
	I0912 15:20:28.350726    4867 logs.go:123] Gathering logs for kube-proxy [f11c4c968a9a] ...
	I0912 15:20:28.350740    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11c4c968a9a"
	I0912 15:20:28.361800    4867 logs.go:123] Gathering logs for etcd [d3229f85be9b] ...
	I0912 15:20:28.361811    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3229f85be9b"
	I0912 15:20:28.376990    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:20:28.377000    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:20:28.377031    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:20:28.377035    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:20:28.377038    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:20:28.377042    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:20:28.377045    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
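
The block above is one full iteration of minikube's wait loop: a /healthz probe against the apiserver at 10.0.2.15:8443 times out after roughly five seconds, the failure triggers a round of container-log gathering, and about ten seconds later the probe repeats. A minimal sketch of that poll-until-healthy pattern, assuming the endpoint and timings shown in the log (illustrative only, not minikube's actual api_server.go):

// Sketch of the poll-until-healthy pattern recorded above (endpoint and
// timings taken from the log; this is an illustration, not minikube's
// actual api_server.go).
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // each probe gives up after ~5s, as in the log
		Transport: &http.Transport{
			// the sketch skips TLS verification; real code trusts the cluster CA
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver is healthy")
				return
			}
		} else {
			// e.g. "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
			fmt.Printf("stopped: %v\n", err)
		}
		time.Sleep(10 * time.Second) // back off, giving the ~15s cadence seen in the timestamps
	}
	fmt.Println("timed out waiting for apiserver")
}
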
	I0912 15:20:38.380925    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:43.383236    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:43.383458    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:20:43.410652    4867 logs.go:276] 2 containers: [eb0dc5acb005 bdc9dc70be85]
	I0912 15:20:43.410752    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:20:43.429435    4867 logs.go:276] 2 containers: [122be89153d2 d3229f85be9b]
	I0912 15:20:43.429507    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:20:43.440548    4867 logs.go:276] 1 containers: [7cc43947deca]
	I0912 15:20:43.440622    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:20:43.452002    4867 logs.go:276] 2 containers: [a3bda796bcce ae93257a08cb]
	I0912 15:20:43.452070    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:20:43.462496    4867 logs.go:276] 1 containers: [f11c4c968a9a]
	I0912 15:20:43.462574    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:20:43.473393    4867 logs.go:276] 2 containers: [f30094d831c4 560d61b775be]
	I0912 15:20:43.473458    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:20:43.483841    4867 logs.go:276] 0 containers: []
	W0912 15:20:43.483853    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:20:43.483912    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:20:43.497638    4867 logs.go:276] 2 containers: [83b1123d4f7f c39b454144bf]
	I0912 15:20:43.497659    4867 logs.go:123] Gathering logs for storage-provisioner [83b1123d4f7f] ...
	I0912 15:20:43.497665    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b1123d4f7f"
	I0912 15:20:43.509183    4867 logs.go:123] Gathering logs for storage-provisioner [c39b454144bf] ...
	I0912 15:20:43.509196    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39b454144bf"
	I0912 15:20:43.520471    4867 logs.go:123] Gathering logs for kube-controller-manager [f30094d831c4] ...
	I0912 15:20:43.520482    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30094d831c4"
	I0912 15:20:43.537913    4867 logs.go:123] Gathering logs for kube-controller-manager [560d61b775be] ...
	I0912 15:20:43.537933    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560d61b775be"
	I0912 15:20:43.552457    4867 logs.go:123] Gathering logs for etcd [122be89153d2] ...
	I0912 15:20:43.552470    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 122be89153d2"
	I0912 15:20:43.566614    4867 logs.go:123] Gathering logs for kube-scheduler [a3bda796bcce] ...
	I0912 15:20:43.566627    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bda796bcce"
	I0912 15:20:43.578608    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:20:43.578621    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:20:43.616067    4867 logs.go:123] Gathering logs for kube-apiserver [eb0dc5acb005] ...
	I0912 15:20:43.616078    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0dc5acb005"
	I0912 15:20:43.631258    4867 logs.go:123] Gathering logs for coredns [7cc43947deca] ...
	I0912 15:20:43.631271    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc43947deca"
	I0912 15:20:43.643546    4867 logs.go:123] Gathering logs for kube-scheduler [ae93257a08cb] ...
	I0912 15:20:43.643557    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae93257a08cb"
	I0912 15:20:43.664703    4867 logs.go:123] Gathering logs for kube-proxy [f11c4c968a9a] ...
	I0912 15:20:43.664718    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11c4c968a9a"
	I0912 15:20:43.677045    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:20:43.677056    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:20:43.702067    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:20:43.702077    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:20:43.740074    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:20:43.740171    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:20:43.741558    4867 logs.go:123] Gathering logs for kube-apiserver [bdc9dc70be85] ...
	I0912 15:20:43.741567    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc9dc70be85"
	I0912 15:20:43.780025    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:20:43.780041    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:20:43.791860    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:20:43.791871    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:20:43.796061    4867 logs.go:123] Gathering logs for etcd [d3229f85be9b] ...
	I0912 15:20:43.796069    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3229f85be9b"
	I0912 15:20:43.811565    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:20:43.811578    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:20:43.811612    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:20:43.811618    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:20:43.811622    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:20:43.811626    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:20:43.811642    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
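
Each gathering round begins by enumerating the control-plane containers one component at a time, which is where counts like "2 containers: [eb0dc5acb005 bdc9dc70be85]" and the repeated "No container was found matching \"kindnet\"" warning come from (no kindnet CNI is deployed in this cluster). A rough equivalent of that step, shelling out to the same docker command the ssh_runner logs (a sketch; the component list is taken from the log):

// Sketch of the container-enumeration step the log repeats for each
// component. It runs the same command the ssh_runner logs:
//   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one short ID per line
}

func main() {
	// Component list taken from the log above.
	for _, c := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
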
	I0912 15:20:53.814374    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:20:58.814910    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:20:58.814978    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:20:58.826522    4867 logs.go:276] 2 containers: [eb0dc5acb005 bdc9dc70be85]
	I0912 15:20:58.826596    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:20:58.837844    4867 logs.go:276] 2 containers: [122be89153d2 d3229f85be9b]
	I0912 15:20:58.837922    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:20:58.848848    4867 logs.go:276] 1 containers: [7cc43947deca]
	I0912 15:20:58.848910    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:20:58.860318    4867 logs.go:276] 2 containers: [a3bda796bcce ae93257a08cb]
	I0912 15:20:58.860403    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:20:58.871023    4867 logs.go:276] 1 containers: [f11c4c968a9a]
	I0912 15:20:58.871089    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:20:58.881536    4867 logs.go:276] 2 containers: [f30094d831c4 560d61b775be]
	I0912 15:20:58.881606    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:20:58.892106    4867 logs.go:276] 0 containers: []
	W0912 15:20:58.892119    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:20:58.892174    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:20:58.902757    4867 logs.go:276] 2 containers: [83b1123d4f7f c39b454144bf]
	I0912 15:20:58.902773    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:20:58.902778    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:20:58.907387    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:20:58.907396    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:20:58.942697    4867 logs.go:123] Gathering logs for etcd [d3229f85be9b] ...
	I0912 15:20:58.942708    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3229f85be9b"
	I0912 15:20:58.957238    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:20:58.957249    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:20:58.993595    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:20:58.993687    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:20:58.995023    4867 logs.go:123] Gathering logs for kube-proxy [f11c4c968a9a] ...
	I0912 15:20:58.995027    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11c4c968a9a"
	I0912 15:20:59.006740    4867 logs.go:123] Gathering logs for storage-provisioner [83b1123d4f7f] ...
	I0912 15:20:59.006749    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b1123d4f7f"
	I0912 15:20:59.018075    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:20:59.018087    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:20:59.030273    4867 logs.go:123] Gathering logs for storage-provisioner [c39b454144bf] ...
	I0912 15:20:59.030287    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39b454144bf"
	I0912 15:20:59.041748    4867 logs.go:123] Gathering logs for kube-apiserver [eb0dc5acb005] ...
	I0912 15:20:59.041760    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0dc5acb005"
	I0912 15:20:59.055907    4867 logs.go:123] Gathering logs for kube-apiserver [bdc9dc70be85] ...
	I0912 15:20:59.055918    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc9dc70be85"
	I0912 15:20:59.094395    4867 logs.go:123] Gathering logs for etcd [122be89153d2] ...
	I0912 15:20:59.094408    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 122be89153d2"
	I0912 15:20:59.108199    4867 logs.go:123] Gathering logs for coredns [7cc43947deca] ...
	I0912 15:20:59.108212    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc43947deca"
	I0912 15:20:59.119366    4867 logs.go:123] Gathering logs for kube-scheduler [ae93257a08cb] ...
	I0912 15:20:59.119381    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae93257a08cb"
	I0912 15:20:59.134155    4867 logs.go:123] Gathering logs for kube-controller-manager [f30094d831c4] ...
	I0912 15:20:59.134165    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30094d831c4"
	I0912 15:20:59.151603    4867 logs.go:123] Gathering logs for kube-controller-manager [560d61b775be] ...
	I0912 15:20:59.151615    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560d61b775be"
	I0912 15:20:59.165894    4867 logs.go:123] Gathering logs for kube-scheduler [a3bda796bcce] ...
	I0912 15:20:59.165904    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bda796bcce"
	I0912 15:20:59.177070    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:20:59.177080    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:20:59.202401    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:20:59.202410    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:20:59.202436    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:20:59.202441    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:20:59.202445    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:20:59.202450    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:20:59.202452    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
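
Every pass turns up the same two kubelet problems: the Node authorizer denies the kubelet's list and watch of the coredns ConfigMap because no pod on stopped-upgrade-841000 is known to reference it. One way to reproduce that denial outside the kubelet is a SelfSubjectAccessReview issued while impersonating the node's identity (a sketch assuming client-go, a kubeconfig at the default path, and rights to impersonate nodes; all names come from the log):

// Sketch: reproduce the kubelet's denied request with a SelfSubjectAccessReview
// while impersonating the node identity from the log. Assumes client-go, a
// kubeconfig at the default path, and permission to impersonate nodes.
package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Identity seen in the kubelet problem lines above.
	cfg.Impersonate.UserName = "system:node:stopped-upgrade-841000"
	cfg.Impersonate.Groups = []string{"system:nodes"}

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	sar := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Namespace: "kube-system",
				Verb:      "list",
				Resource:  "configmaps",
				Name:      "coredns",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(
		context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// The Node authorizer only allows this once a pod scheduled to the node
	// references the ConfigMap; hence the "no relationship found" message above.
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}
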
	I0912 15:21:09.211439    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:21:14.219119    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:21:14.219386    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:21:14.250137    4867 logs.go:276] 2 containers: [eb0dc5acb005 bdc9dc70be85]
	I0912 15:21:14.250273    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:21:14.268223    4867 logs.go:276] 2 containers: [122be89153d2 d3229f85be9b]
	I0912 15:21:14.268333    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:21:14.281878    4867 logs.go:276] 1 containers: [7cc43947deca]
	I0912 15:21:14.281951    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:21:14.293733    4867 logs.go:276] 2 containers: [a3bda796bcce ae93257a08cb]
	I0912 15:21:14.293811    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:21:14.304343    4867 logs.go:276] 1 containers: [f11c4c968a9a]
	I0912 15:21:14.304415    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:21:14.320105    4867 logs.go:276] 2 containers: [f30094d831c4 560d61b775be]
	I0912 15:21:14.320175    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:21:14.330512    4867 logs.go:276] 0 containers: []
	W0912 15:21:14.330523    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:21:14.330585    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:21:14.343560    4867 logs.go:276] 2 containers: [83b1123d4f7f c39b454144bf]
	I0912 15:21:14.343579    4867 logs.go:123] Gathering logs for kube-proxy [f11c4c968a9a] ...
	I0912 15:21:14.343585    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11c4c968a9a"
	I0912 15:21:14.355685    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:21:14.355696    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:21:14.392797    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:21:14.392889    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:21:14.394271    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:21:14.394279    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:21:14.431368    4867 logs.go:123] Gathering logs for etcd [d3229f85be9b] ...
	I0912 15:21:14.431382    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3229f85be9b"
	I0912 15:21:14.446549    4867 logs.go:123] Gathering logs for kube-scheduler [a3bda796bcce] ...
	I0912 15:21:14.446560    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bda796bcce"
	I0912 15:21:14.459525    4867 logs.go:123] Gathering logs for kube-controller-manager [560d61b775be] ...
	I0912 15:21:14.459539    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560d61b775be"
	I0912 15:21:14.474231    4867 logs.go:123] Gathering logs for storage-provisioner [83b1123d4f7f] ...
	I0912 15:21:14.474241    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b1123d4f7f"
	I0912 15:21:14.485924    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:21:14.485934    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:21:14.497975    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:21:14.497987    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:21:14.502241    4867 logs.go:123] Gathering logs for kube-apiserver [bdc9dc70be85] ...
	I0912 15:21:14.502251    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc9dc70be85"
	I0912 15:21:14.539256    4867 logs.go:123] Gathering logs for etcd [122be89153d2] ...
	I0912 15:21:14.539268    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 122be89153d2"
	I0912 15:21:14.553773    4867 logs.go:123] Gathering logs for kube-scheduler [ae93257a08cb] ...
	I0912 15:21:14.553783    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae93257a08cb"
	I0912 15:21:14.569249    4867 logs.go:123] Gathering logs for kube-apiserver [eb0dc5acb005] ...
	I0912 15:21:14.569260    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0dc5acb005"
	I0912 15:21:14.589323    4867 logs.go:123] Gathering logs for coredns [7cc43947deca] ...
	I0912 15:21:14.589334    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc43947deca"
	I0912 15:21:14.600626    4867 logs.go:123] Gathering logs for storage-provisioner [c39b454144bf] ...
	I0912 15:21:14.600638    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39b454144bf"
	I0912 15:21:14.618973    4867 logs.go:123] Gathering logs for kube-controller-manager [f30094d831c4] ...
	I0912 15:21:14.618984    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30094d831c4"
	I0912 15:21:14.636798    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:21:14.636813    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:21:14.662283    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:21:14.662291    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:21:14.662318    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:21:14.662323    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:21:14.662327    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:21:14.662330    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:21:14.662343    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:21:24.673233    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:21:29.677518    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:21:29.677764    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:21:29.696568    4867 logs.go:276] 2 containers: [eb0dc5acb005 bdc9dc70be85]
	I0912 15:21:29.696660    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:21:29.716020    4867 logs.go:276] 2 containers: [122be89153d2 d3229f85be9b]
	I0912 15:21:29.716092    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:21:29.726732    4867 logs.go:276] 1 containers: [7cc43947deca]
	I0912 15:21:29.726805    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:21:29.737998    4867 logs.go:276] 2 containers: [a3bda796bcce ae93257a08cb]
	I0912 15:21:29.738075    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:21:29.748442    4867 logs.go:276] 1 containers: [f11c4c968a9a]
	I0912 15:21:29.748515    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:21:29.758955    4867 logs.go:276] 2 containers: [f30094d831c4 560d61b775be]
	I0912 15:21:29.759028    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:21:29.768994    4867 logs.go:276] 0 containers: []
	W0912 15:21:29.769006    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:21:29.769061    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:21:29.779685    4867 logs.go:276] 2 containers: [83b1123d4f7f c39b454144bf]
	I0912 15:21:29.779706    4867 logs.go:123] Gathering logs for kube-apiserver [eb0dc5acb005] ...
	I0912 15:21:29.779711    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0dc5acb005"
	I0912 15:21:29.794027    4867 logs.go:123] Gathering logs for etcd [122be89153d2] ...
	I0912 15:21:29.794039    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 122be89153d2"
	I0912 15:21:29.808484    4867 logs.go:123] Gathering logs for kube-scheduler [ae93257a08cb] ...
	I0912 15:21:29.808495    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae93257a08cb"
	I0912 15:21:29.824041    4867 logs.go:123] Gathering logs for storage-provisioner [c39b454144bf] ...
	I0912 15:21:29.824052    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39b454144bf"
	I0912 15:21:29.835693    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:21:29.835704    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:21:29.870527    4867 logs.go:123] Gathering logs for kube-scheduler [a3bda796bcce] ...
	I0912 15:21:29.870540    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bda796bcce"
	I0912 15:21:29.882352    4867 logs.go:123] Gathering logs for kube-proxy [f11c4c968a9a] ...
	I0912 15:21:29.882364    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11c4c968a9a"
	I0912 15:21:29.893793    4867 logs.go:123] Gathering logs for kube-controller-manager [560d61b775be] ...
	I0912 15:21:29.893807    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560d61b775be"
	I0912 15:21:29.908353    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:21:29.908365    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:21:29.945334    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:21:29.945428    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:21:29.946806    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:21:29.946811    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:21:29.950824    4867 logs.go:123] Gathering logs for kube-apiserver [bdc9dc70be85] ...
	I0912 15:21:29.950834    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc9dc70be85"
	I0912 15:21:29.988545    4867 logs.go:123] Gathering logs for etcd [d3229f85be9b] ...
	I0912 15:21:29.988555    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3229f85be9b"
	I0912 15:21:30.002977    4867 logs.go:123] Gathering logs for coredns [7cc43947deca] ...
	I0912 15:21:30.002988    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc43947deca"
	I0912 15:21:30.013993    4867 logs.go:123] Gathering logs for storage-provisioner [83b1123d4f7f] ...
	I0912 15:21:30.014005    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b1123d4f7f"
	I0912 15:21:30.025621    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:21:30.025632    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:21:30.051437    4867 logs.go:123] Gathering logs for kube-controller-manager [f30094d831c4] ...
	I0912 15:21:30.051449    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30094d831c4"
	I0912 15:21:30.074972    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:21:30.074985    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:21:30.087162    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:21:30.087171    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:21:30.087202    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:21:30.087206    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:21:30.087209    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:21:30.087213    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:21:30.087216    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:21:40.093785    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:21:45.097219    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:21:45.097405    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:21:45.119018    4867 logs.go:276] 2 containers: [eb0dc5acb005 bdc9dc70be85]
	I0912 15:21:45.119105    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:21:45.132278    4867 logs.go:276] 2 containers: [122be89153d2 d3229f85be9b]
	I0912 15:21:45.132352    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:21:45.144191    4867 logs.go:276] 1 containers: [7cc43947deca]
	I0912 15:21:45.144260    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:21:45.156117    4867 logs.go:276] 2 containers: [a3bda796bcce ae93257a08cb]
	I0912 15:21:45.156179    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:21:45.171040    4867 logs.go:276] 1 containers: [f11c4c968a9a]
	I0912 15:21:45.171113    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:21:45.181608    4867 logs.go:276] 2 containers: [f30094d831c4 560d61b775be]
	I0912 15:21:45.181677    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:21:45.198741    4867 logs.go:276] 0 containers: []
	W0912 15:21:45.198756    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:21:45.198815    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:21:45.209199    4867 logs.go:276] 2 containers: [83b1123d4f7f c39b454144bf]
	I0912 15:21:45.209219    4867 logs.go:123] Gathering logs for kube-scheduler [ae93257a08cb] ...
	I0912 15:21:45.209225    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae93257a08cb"
	I0912 15:21:45.224025    4867 logs.go:123] Gathering logs for kube-proxy [f11c4c968a9a] ...
	I0912 15:21:45.224035    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11c4c968a9a"
	I0912 15:21:45.241677    4867 logs.go:123] Gathering logs for kube-controller-manager [f30094d831c4] ...
	I0912 15:21:45.241688    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30094d831c4"
	I0912 15:21:45.259461    4867 logs.go:123] Gathering logs for storage-provisioner [83b1123d4f7f] ...
	I0912 15:21:45.259469    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b1123d4f7f"
	I0912 15:21:45.271878    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:21:45.271889    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:21:45.283161    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:21:45.283172    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:21:45.318824    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:21:45.318922    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:21:45.320254    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:21:45.320258    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:21:45.356957    4867 logs.go:123] Gathering logs for kube-scheduler [a3bda796bcce] ...
	I0912 15:21:45.356968    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bda796bcce"
	I0912 15:21:45.369680    4867 logs.go:123] Gathering logs for kube-controller-manager [560d61b775be] ...
	I0912 15:21:45.369693    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560d61b775be"
	I0912 15:21:45.384474    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:21:45.384486    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:21:45.410000    4867 logs.go:123] Gathering logs for kube-apiserver [eb0dc5acb005] ...
	I0912 15:21:45.410010    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0dc5acb005"
	I0912 15:21:45.424332    4867 logs.go:123] Gathering logs for etcd [d3229f85be9b] ...
	I0912 15:21:45.424342    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3229f85be9b"
	I0912 15:21:45.443286    4867 logs.go:123] Gathering logs for coredns [7cc43947deca] ...
	I0912 15:21:45.443298    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc43947deca"
	I0912 15:21:45.453970    4867 logs.go:123] Gathering logs for storage-provisioner [c39b454144bf] ...
	I0912 15:21:45.453981    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39b454144bf"
	I0912 15:21:45.465428    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:21:45.465439    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:21:45.470230    4867 logs.go:123] Gathering logs for kube-apiserver [bdc9dc70be85] ...
	I0912 15:21:45.470239    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc9dc70be85"
	I0912 15:21:45.508131    4867 logs.go:123] Gathering logs for etcd [122be89153d2] ...
	I0912 15:21:45.508141    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 122be89153d2"
	I0912 15:21:45.527549    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:21:45.527559    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:21:45.527585    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:21:45.527589    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:21:45.527593    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:21:45.527597    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:21:45.527601    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:21:55.532425    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:22:00.534964    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:22:00.535408    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:22:00.570452    4867 logs.go:276] 2 containers: [eb0dc5acb005 bdc9dc70be85]
	I0912 15:22:00.570585    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:22:00.590553    4867 logs.go:276] 2 containers: [122be89153d2 d3229f85be9b]
	I0912 15:22:00.590647    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:22:00.607852    4867 logs.go:276] 1 containers: [7cc43947deca]
	I0912 15:22:00.607933    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:22:00.619100    4867 logs.go:276] 2 containers: [a3bda796bcce ae93257a08cb]
	I0912 15:22:00.619173    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:22:00.629456    4867 logs.go:276] 1 containers: [f11c4c968a9a]
	I0912 15:22:00.629519    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:22:00.640136    4867 logs.go:276] 2 containers: [f30094d831c4 560d61b775be]
	I0912 15:22:00.640202    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:22:00.651860    4867 logs.go:276] 0 containers: []
	W0912 15:22:00.651873    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:22:00.651936    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:22:00.667086    4867 logs.go:276] 2 containers: [83b1123d4f7f c39b454144bf]
	I0912 15:22:00.667114    4867 logs.go:123] Gathering logs for storage-provisioner [83b1123d4f7f] ...
	I0912 15:22:00.667119    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b1123d4f7f"
	I0912 15:22:00.678721    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:22:00.678732    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:22:00.682964    4867 logs.go:123] Gathering logs for kube-apiserver [eb0dc5acb005] ...
	I0912 15:22:00.682972    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0dc5acb005"
	I0912 15:22:00.698077    4867 logs.go:123] Gathering logs for kube-apiserver [bdc9dc70be85] ...
	I0912 15:22:00.698087    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc9dc70be85"
	I0912 15:22:00.736896    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:22:00.736912    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:22:00.760433    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:22:00.760447    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:22:00.772735    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:22:00.772751    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:22:00.808461    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:22:00.808553    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:22:00.809870    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:22:00.809874    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:22:00.845032    4867 logs.go:123] Gathering logs for coredns [7cc43947deca] ...
	I0912 15:22:00.845043    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc43947deca"
	I0912 15:22:00.855923    4867 logs.go:123] Gathering logs for kube-scheduler [a3bda796bcce] ...
	I0912 15:22:00.855936    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bda796bcce"
	I0912 15:22:00.867782    4867 logs.go:123] Gathering logs for kube-scheduler [ae93257a08cb] ...
	I0912 15:22:00.867793    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae93257a08cb"
	I0912 15:22:00.883412    4867 logs.go:123] Gathering logs for kube-proxy [f11c4c968a9a] ...
	I0912 15:22:00.883422    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11c4c968a9a"
	I0912 15:22:00.895324    4867 logs.go:123] Gathering logs for kube-controller-manager [f30094d831c4] ...
	I0912 15:22:00.895336    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30094d831c4"
	I0912 15:22:00.913310    4867 logs.go:123] Gathering logs for kube-controller-manager [560d61b775be] ...
	I0912 15:22:00.913321    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560d61b775be"
	I0912 15:22:00.927602    4867 logs.go:123] Gathering logs for storage-provisioner [c39b454144bf] ...
	I0912 15:22:00.927615    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39b454144bf"
	I0912 15:22:00.938552    4867 logs.go:123] Gathering logs for etcd [122be89153d2] ...
	I0912 15:22:00.938562    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 122be89153d2"
	I0912 15:22:00.951847    4867 logs.go:123] Gathering logs for etcd [d3229f85be9b] ...
	I0912 15:22:00.951864    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3229f85be9b"
	I0912 15:22:00.966616    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:22:00.966628    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:22:00.966652    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:22:00.966656    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:22:00.966659    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:22:00.966662    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:22:00.966664    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:22:10.969195    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:22:15.971527    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:22:15.971679    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:22:15.984779    4867 logs.go:276] 2 containers: [eb0dc5acb005 bdc9dc70be85]
	I0912 15:22:15.984856    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:22:15.995815    4867 logs.go:276] 2 containers: [122be89153d2 d3229f85be9b]
	I0912 15:22:15.995886    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:22:16.006342    4867 logs.go:276] 1 containers: [7cc43947deca]
	I0912 15:22:16.006419    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:22:16.017402    4867 logs.go:276] 2 containers: [a3bda796bcce ae93257a08cb]
	I0912 15:22:16.017474    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:22:16.028226    4867 logs.go:276] 1 containers: [f11c4c968a9a]
	I0912 15:22:16.028289    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:22:16.038998    4867 logs.go:276] 2 containers: [f30094d831c4 560d61b775be]
	I0912 15:22:16.039068    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:22:16.049583    4867 logs.go:276] 0 containers: []
	W0912 15:22:16.049594    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:22:16.049651    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:22:16.060459    4867 logs.go:276] 2 containers: [83b1123d4f7f c39b454144bf]
	I0912 15:22:16.060478    4867 logs.go:123] Gathering logs for kube-proxy [f11c4c968a9a] ...
	I0912 15:22:16.060484    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11c4c968a9a"
	I0912 15:22:16.072588    4867 logs.go:123] Gathering logs for kube-controller-manager [560d61b775be] ...
	I0912 15:22:16.072598    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560d61b775be"
	I0912 15:22:16.091718    4867 logs.go:123] Gathering logs for storage-provisioner [83b1123d4f7f] ...
	I0912 15:22:16.091727    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b1123d4f7f"
	I0912 15:22:16.102997    4867 logs.go:123] Gathering logs for storage-provisioner [c39b454144bf] ...
	I0912 15:22:16.103009    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39b454144bf"
	I0912 15:22:16.114247    4867 logs.go:123] Gathering logs for kube-apiserver [bdc9dc70be85] ...
	I0912 15:22:16.114258    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc9dc70be85"
	I0912 15:22:16.151860    4867 logs.go:123] Gathering logs for coredns [7cc43947deca] ...
	I0912 15:22:16.151872    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc43947deca"
	I0912 15:22:16.162899    4867 logs.go:123] Gathering logs for kube-scheduler [a3bda796bcce] ...
	I0912 15:22:16.162913    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bda796bcce"
	I0912 15:22:16.178847    4867 logs.go:123] Gathering logs for kube-scheduler [ae93257a08cb] ...
	I0912 15:22:16.178859    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae93257a08cb"
	I0912 15:22:16.197048    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:22:16.197059    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:22:16.233974    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:22:16.234066    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
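	(The two kubelet problems flagged above are Node-authorizer denials: a kubelet may only read a ConfigMap referenced by a pod already bound to its node, and at that moment no such binding existed for coredns. With a working apiserver the denial could be confirmed via impersonation; hypothetical here, since healthz never comes up:

	    kubectl auth can-i list configmaps -n kube-system \
	      --as=system:node:stopped-upgrade-841000 --as-group=system:nodes
	)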
	I0912 15:22:16.235403    4867 logs.go:123] Gathering logs for kube-apiserver [eb0dc5acb005] ...
	I0912 15:22:16.235407    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0dc5acb005"
	I0912 15:22:16.249249    4867 logs.go:123] Gathering logs for etcd [122be89153d2] ...
	I0912 15:22:16.249262    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 122be89153d2"
	I0912 15:22:16.265734    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:22:16.265747    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:22:16.278042    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:22:16.278058    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:22:16.282557    4867 logs.go:123] Gathering logs for etcd [d3229f85be9b] ...
	I0912 15:22:16.282564    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3229f85be9b"
	I0912 15:22:16.297794    4867 logs.go:123] Gathering logs for kube-controller-manager [f30094d831c4] ...
	I0912 15:22:16.297808    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30094d831c4"
	I0912 15:22:16.315293    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:22:16.315307    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:22:16.350035    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:22:16.350050    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:22:16.372931    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:22:16.372939    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:22:16.372965    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:22:16.372969    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:22:16.372973    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:22:16.372976    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:22:16.372979    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:22:26.376466    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:22:31.378649    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:22:31.378752    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:22:31.389713    4867 logs.go:276] 2 containers: [eb0dc5acb005 bdc9dc70be85]
	I0912 15:22:31.389791    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:22:31.400838    4867 logs.go:276] 2 containers: [122be89153d2 d3229f85be9b]
	I0912 15:22:31.400910    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:22:31.410796    4867 logs.go:276] 1 containers: [7cc43947deca]
	I0912 15:22:31.410867    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:22:31.421529    4867 logs.go:276] 2 containers: [a3bda796bcce ae93257a08cb]
	I0912 15:22:31.421597    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:22:31.432313    4867 logs.go:276] 1 containers: [f11c4c968a9a]
	I0912 15:22:31.432376    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:22:31.442838    4867 logs.go:276] 2 containers: [f30094d831c4 560d61b775be]
	I0912 15:22:31.442907    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:22:31.452350    4867 logs.go:276] 0 containers: []
	W0912 15:22:31.452360    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:22:31.452424    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:22:31.463068    4867 logs.go:276] 2 containers: [83b1123d4f7f c39b454144bf]
	I0912 15:22:31.463086    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:22:31.463092    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:22:31.499659    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:22:31.499758    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:22:31.501055    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:22:31.501061    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:22:31.505948    4867 logs.go:123] Gathering logs for kube-apiserver [bdc9dc70be85] ...
	I0912 15:22:31.505956    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc9dc70be85"
	I0912 15:22:31.546958    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:22:31.546969    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:22:31.559077    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:22:31.559089    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:22:31.596977    4867 logs.go:123] Gathering logs for kube-apiserver [eb0dc5acb005] ...
	I0912 15:22:31.596987    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0dc5acb005"
	I0912 15:22:31.611202    4867 logs.go:123] Gathering logs for kube-proxy [f11c4c968a9a] ...
	I0912 15:22:31.611214    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11c4c968a9a"
	I0912 15:22:31.625867    4867 logs.go:123] Gathering logs for kube-controller-manager [f30094d831c4] ...
	I0912 15:22:31.625878    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30094d831c4"
	I0912 15:22:31.642959    4867 logs.go:123] Gathering logs for kube-controller-manager [560d61b775be] ...
	I0912 15:22:31.642971    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560d61b775be"
	I0912 15:22:31.656953    4867 logs.go:123] Gathering logs for storage-provisioner [c39b454144bf] ...
	I0912 15:22:31.656963    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39b454144bf"
	I0912 15:22:31.677450    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:22:31.677461    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:22:31.700430    4867 logs.go:123] Gathering logs for etcd [d3229f85be9b] ...
	I0912 15:22:31.700438    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3229f85be9b"
	I0912 15:22:31.717762    4867 logs.go:123] Gathering logs for coredns [7cc43947deca] ...
	I0912 15:22:31.717777    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc43947deca"
	I0912 15:22:31.729452    4867 logs.go:123] Gathering logs for storage-provisioner [83b1123d4f7f] ...
	I0912 15:22:31.729464    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b1123d4f7f"
	I0912 15:22:31.740477    4867 logs.go:123] Gathering logs for etcd [122be89153d2] ...
	I0912 15:22:31.740485    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 122be89153d2"
	I0912 15:22:31.758289    4867 logs.go:123] Gathering logs for kube-scheduler [a3bda796bcce] ...
	I0912 15:22:31.758301    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bda796bcce"
	I0912 15:22:31.770081    4867 logs.go:123] Gathering logs for kube-scheduler [ae93257a08cb] ...
	I0912 15:22:31.770091    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae93257a08cb"
	I0912 15:22:31.785290    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:22:31.785299    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:22:31.785324    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:22:31.785328    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:22:31.785331    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:22:31.785335    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:22:31.785338    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:22:41.789358    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:22:46.791597    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:22:46.791752    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:22:46.804120    4867 logs.go:276] 2 containers: [eb0dc5acb005 bdc9dc70be85]
	I0912 15:22:46.804197    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:22:46.815069    4867 logs.go:276] 2 containers: [122be89153d2 d3229f85be9b]
	I0912 15:22:46.815133    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:22:46.831113    4867 logs.go:276] 1 containers: [7cc43947deca]
	I0912 15:22:46.831181    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:22:46.841471    4867 logs.go:276] 2 containers: [a3bda796bcce ae93257a08cb]
	I0912 15:22:46.841549    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:22:46.851727    4867 logs.go:276] 1 containers: [f11c4c968a9a]
	I0912 15:22:46.851795    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:22:46.862028    4867 logs.go:276] 2 containers: [f30094d831c4 560d61b775be]
	I0912 15:22:46.862108    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:22:46.872983    4867 logs.go:276] 0 containers: []
	W0912 15:22:46.872999    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:22:46.873062    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:22:46.883832    4867 logs.go:276] 2 containers: [83b1123d4f7f c39b454144bf]
	I0912 15:22:46.883854    4867 logs.go:123] Gathering logs for kube-apiserver [eb0dc5acb005] ...
	I0912 15:22:46.883859    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0dc5acb005"
	I0912 15:22:46.897592    4867 logs.go:123] Gathering logs for kube-apiserver [bdc9dc70be85] ...
	I0912 15:22:46.897601    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc9dc70be85"
	I0912 15:22:46.935791    4867 logs.go:123] Gathering logs for coredns [7cc43947deca] ...
	I0912 15:22:46.935804    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc43947deca"
	I0912 15:22:46.946869    4867 logs.go:123] Gathering logs for storage-provisioner [c39b454144bf] ...
	I0912 15:22:46.946881    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39b454144bf"
	I0912 15:22:46.958175    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:22:46.958188    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:22:46.981325    4867 logs.go:123] Gathering logs for etcd [122be89153d2] ...
	I0912 15:22:46.981335    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 122be89153d2"
	I0912 15:22:46.995634    4867 logs.go:123] Gathering logs for kube-controller-manager [560d61b775be] ...
	I0912 15:22:46.995650    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560d61b775be"
	I0912 15:22:47.010819    4867 logs.go:123] Gathering logs for storage-provisioner [83b1123d4f7f] ...
	I0912 15:22:47.010835    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b1123d4f7f"
	I0912 15:22:47.022642    4867 logs.go:123] Gathering logs for kube-scheduler [ae93257a08cb] ...
	I0912 15:22:47.022654    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae93257a08cb"
	I0912 15:22:47.038394    4867 logs.go:123] Gathering logs for kube-proxy [f11c4c968a9a] ...
	I0912 15:22:47.038407    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11c4c968a9a"
	I0912 15:22:47.050302    4867 logs.go:123] Gathering logs for kube-controller-manager [f30094d831c4] ...
	I0912 15:22:47.050314    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30094d831c4"
	I0912 15:22:47.070138    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:22:47.070150    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:22:47.105237    4867 logs.go:123] Gathering logs for etcd [d3229f85be9b] ...
	I0912 15:22:47.105252    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3229f85be9b"
	I0912 15:22:47.120628    4867 logs.go:123] Gathering logs for kube-scheduler [a3bda796bcce] ...
	I0912 15:22:47.120640    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bda796bcce"
	I0912 15:22:47.140072    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:22:47.140087    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:22:47.178999    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:22:47.179095    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:22:47.180485    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:22:47.180495    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:22:47.185293    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:22:47.185303    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:22:47.196957    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:22:47.196967    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:22:47.196992    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:22:47.196997    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:22:47.197000    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:22:47.197003    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:22:47.197006    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:22:57.200941    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:23:02.201455    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:23:02.201556    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:23:02.212645    4867 logs.go:276] 2 containers: [eb0dc5acb005 bdc9dc70be85]
	I0912 15:23:02.212709    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:23:02.224146    4867 logs.go:276] 2 containers: [122be89153d2 d3229f85be9b]
	I0912 15:23:02.224221    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:23:02.237390    4867 logs.go:276] 1 containers: [7cc43947deca]
	I0912 15:23:02.237458    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:23:02.248074    4867 logs.go:276] 2 containers: [a3bda796bcce ae93257a08cb]
	I0912 15:23:02.248148    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:23:02.258342    4867 logs.go:276] 1 containers: [f11c4c968a9a]
	I0912 15:23:02.258415    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:23:02.273528    4867 logs.go:276] 2 containers: [f30094d831c4 560d61b775be]
	I0912 15:23:02.273598    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:23:02.283701    4867 logs.go:276] 0 containers: []
	W0912 15:23:02.283713    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:23:02.283772    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:23:02.294749    4867 logs.go:276] 2 containers: [83b1123d4f7f c39b454144bf]
	I0912 15:23:02.294766    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:23:02.294773    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:23:02.335225    4867 logs.go:123] Gathering logs for etcd [d3229f85be9b] ...
	I0912 15:23:02.335238    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3229f85be9b"
	I0912 15:23:02.352625    4867 logs.go:123] Gathering logs for kube-proxy [f11c4c968a9a] ...
	I0912 15:23:02.352636    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11c4c968a9a"
	I0912 15:23:02.364353    4867 logs.go:123] Gathering logs for storage-provisioner [c39b454144bf] ...
	I0912 15:23:02.364366    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39b454144bf"
	I0912 15:23:02.375833    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:23:02.375846    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:23:02.413255    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:23:02.413348    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:23:02.414689    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:23:02.414695    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:23:02.427498    4867 logs.go:123] Gathering logs for kube-apiserver [eb0dc5acb005] ...
	I0912 15:23:02.427510    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0dc5acb005"
	I0912 15:23:02.441780    4867 logs.go:123] Gathering logs for etcd [122be89153d2] ...
	I0912 15:23:02.441794    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 122be89153d2"
	I0912 15:23:02.456229    4867 logs.go:123] Gathering logs for kube-scheduler [a3bda796bcce] ...
	I0912 15:23:02.456242    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bda796bcce"
	I0912 15:23:02.471552    4867 logs.go:123] Gathering logs for storage-provisioner [83b1123d4f7f] ...
	I0912 15:23:02.471564    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b1123d4f7f"
	I0912 15:23:02.485669    4867 logs.go:123] Gathering logs for kube-controller-manager [560d61b775be] ...
	I0912 15:23:02.485684    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560d61b775be"
	I0912 15:23:02.499316    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:23:02.499331    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:23:02.522923    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:23:02.522931    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:23:02.527387    4867 logs.go:123] Gathering logs for kube-apiserver [bdc9dc70be85] ...
	I0912 15:23:02.527396    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc9dc70be85"
	I0912 15:23:02.567555    4867 logs.go:123] Gathering logs for coredns [7cc43947deca] ...
	I0912 15:23:02.567566    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc43947deca"
	I0912 15:23:02.583433    4867 logs.go:123] Gathering logs for kube-scheduler [ae93257a08cb] ...
	I0912 15:23:02.583445    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae93257a08cb"
	I0912 15:23:02.603163    4867 logs.go:123] Gathering logs for kube-controller-manager [f30094d831c4] ...
	I0912 15:23:02.603177    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30094d831c4"
	I0912 15:23:02.620934    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:23:02.620946    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:23:02.620975    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:23:02.620979    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:23:02.620983    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:23:02.620987    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:23:02.620989    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:23:12.624137    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:23:17.626445    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:23:17.626766    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:23:17.661809    4867 logs.go:276] 2 containers: [eb0dc5acb005 bdc9dc70be85]
	I0912 15:23:17.661941    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:23:17.690228    4867 logs.go:276] 2 containers: [122be89153d2 d3229f85be9b]
	I0912 15:23:17.690318    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:23:17.703803    4867 logs.go:276] 1 containers: [7cc43947deca]
	I0912 15:23:17.703879    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:23:17.715111    4867 logs.go:276] 2 containers: [a3bda796bcce ae93257a08cb]
	I0912 15:23:17.715184    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:23:17.730193    4867 logs.go:276] 1 containers: [f11c4c968a9a]
	I0912 15:23:17.730260    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:23:17.740910    4867 logs.go:276] 2 containers: [f30094d831c4 560d61b775be]
	I0912 15:23:17.740982    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:23:17.752103    4867 logs.go:276] 0 containers: []
	W0912 15:23:17.752116    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:23:17.752176    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:23:17.763666    4867 logs.go:276] 2 containers: [83b1123d4f7f c39b454144bf]
	I0912 15:23:17.763686    4867 logs.go:123] Gathering logs for kube-apiserver [eb0dc5acb005] ...
	I0912 15:23:17.763691    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0dc5acb005"
	I0912 15:23:17.778532    4867 logs.go:123] Gathering logs for coredns [7cc43947deca] ...
	I0912 15:23:17.778543    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc43947deca"
	I0912 15:23:17.789985    4867 logs.go:123] Gathering logs for kube-proxy [f11c4c968a9a] ...
	I0912 15:23:17.789996    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11c4c968a9a"
	I0912 15:23:17.802196    4867 logs.go:123] Gathering logs for kube-controller-manager [560d61b775be] ...
	I0912 15:23:17.802207    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560d61b775be"
	I0912 15:23:17.817806    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:23:17.817814    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:23:17.839967    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:23:17.839976    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:23:17.874831    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:23:17.874923    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:23:17.876272    4867 logs.go:123] Gathering logs for kube-apiserver [bdc9dc70be85] ...
	I0912 15:23:17.876279    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc9dc70be85"
	I0912 15:23:17.913850    4867 logs.go:123] Gathering logs for kube-scheduler [a3bda796bcce] ...
	I0912 15:23:17.913860    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bda796bcce"
	I0912 15:23:17.926335    4867 logs.go:123] Gathering logs for kube-controller-manager [f30094d831c4] ...
	I0912 15:23:17.926345    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f30094d831c4"
	I0912 15:23:17.943728    4867 logs.go:123] Gathering logs for storage-provisioner [c39b454144bf] ...
	I0912 15:23:17.943738    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39b454144bf"
	I0912 15:23:17.954536    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:23:17.954547    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:23:17.967827    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:23:17.967838    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:23:17.972445    4867 logs.go:123] Gathering logs for etcd [122be89153d2] ...
	I0912 15:23:17.972452    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 122be89153d2"
	I0912 15:23:17.986405    4867 logs.go:123] Gathering logs for kube-scheduler [ae93257a08cb] ...
	I0912 15:23:17.986414    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae93257a08cb"
	I0912 15:23:18.008526    4867 logs.go:123] Gathering logs for storage-provisioner [83b1123d4f7f] ...
	I0912 15:23:18.008537    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b1123d4f7f"
	I0912 15:23:18.020308    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:23:18.020319    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:23:18.054563    4867 logs.go:123] Gathering logs for etcd [d3229f85be9b] ...
	I0912 15:23:18.054573    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3229f85be9b"
	I0912 15:23:18.070114    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:23:18.070123    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:23:18.070152    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:23:18.070156    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:23:18.070159    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:23:18.070163    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:23:18.070166    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:23:28.074164    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:23:33.076587    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:23:33.076714    4867 kubeadm.go:597] duration metric: took 4m6.942738333s to restartPrimaryControlPlane
	W0912 15:23:33.076783    4867 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0912 15:23:33.076814    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0912 15:23:34.079651    4867 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.002847958s)
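	(kubeadm reset removes the static-pod manifests and the *.conf kubeconfigs under /etc/kubernetes on the node, which is why every config-file check that follows comes back with "No such file or directory". A quick confirmation, under the same SSH-to-guest assumption:

	    sudo ls -la /etc/kubernetes /etc/kubernetes/manifests 2>&1
	)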
	I0912 15:23:34.080009    4867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 15:23:34.085079    4867 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 15:23:34.088185    4867 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 15:23:34.090744    4867 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 15:23:34.090750    4867 kubeadm.go:157] found existing configuration files:
	
	I0912 15:23:34.090771    4867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/admin.conf
	I0912 15:23:34.093207    4867 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 15:23:34.093230    4867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 15:23:34.096490    4867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/kubelet.conf
	I0912 15:23:34.099280    4867 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 15:23:34.099306    4867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 15:23:34.101862    4867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/controller-manager.conf
	I0912 15:23:34.104891    4867 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 15:23:34.104909    4867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 15:23:34.107819    4867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/scheduler.conf
	I0912 15:23:34.110253    4867 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 15:23:34.110272    4867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
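	(The stale-config sweep above amounts to: keep a kubeconfig only if it already points at this cluster's endpoint. A compact sketch of the same logic, with the endpoint and file names taken from the log:

	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q "https://control-plane.minikube.internal:50517" \
	        "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
	    done
	)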
	I0912 15:23:34.113188    4867 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 15:23:34.130789    4867 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0912 15:23:34.130834    4867 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 15:23:34.178015    4867 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 15:23:34.178069    4867 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 15:23:34.178132    4867 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 15:23:34.232876    4867 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 15:23:34.237086    4867 out.go:235]   - Generating certificates and keys ...
	I0912 15:23:34.237125    4867 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 15:23:34.237164    4867 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 15:23:34.237206    4867 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0912 15:23:34.237241    4867 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0912 15:23:34.237272    4867 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0912 15:23:34.237299    4867 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0912 15:23:34.237334    4867 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0912 15:23:34.237365    4867 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0912 15:23:34.237399    4867 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0912 15:23:34.237438    4867 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0912 15:23:34.237464    4867 kubeadm.go:310] [certs] Using the existing "sa" key
	I0912 15:23:34.237499    4867 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 15:23:34.526335    4867 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 15:23:34.566242    4867 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 15:23:34.633489    4867 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 15:23:34.727055    4867 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 15:23:34.759283    4867 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 15:23:34.759654    4867 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 15:23:34.759678    4867 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 15:23:34.826447    4867 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 15:23:34.830439    4867 out.go:235]   - Booting up control plane ...
	I0912 15:23:34.830490    4867 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 15:23:34.830528    4867 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 15:23:34.830576    4867 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 15:23:34.830762    4867 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 15:23:34.831603    4867 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0912 15:23:39.833324    4867 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.001087 seconds
	I0912 15:23:39.833419    4867 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 15:23:39.836920    4867 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 15:23:40.344863    4867 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0912 15:23:40.345037    4867 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-841000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0912 15:23:40.854156    4867 kubeadm.go:310] [bootstrap-token] Using token: 9batvv.i8tnvzhsrc8b6qr7
	I0912 15:23:40.858295    4867 out.go:235]   - Configuring RBAC rules ...
	I0912 15:23:40.858359    4867 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 15:23:40.866287    4867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 15:23:40.868459    4867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 15:23:40.869462    4867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 15:23:40.870346    4867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 15:23:40.871182    4867 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 15:23:40.874672    4867 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 15:23:41.053029    4867 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0912 15:23:41.268398    4867 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0912 15:23:41.268969    4867 kubeadm.go:310] 
	I0912 15:23:41.269000    4867 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0912 15:23:41.269006    4867 kubeadm.go:310] 
	I0912 15:23:41.269038    4867 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0912 15:23:41.269044    4867 kubeadm.go:310] 
	I0912 15:23:41.269060    4867 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0912 15:23:41.269091    4867 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 15:23:41.269116    4867 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 15:23:41.269134    4867 kubeadm.go:310] 
	I0912 15:23:41.269157    4867 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0912 15:23:41.269178    4867 kubeadm.go:310] 
	I0912 15:23:41.269241    4867 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0912 15:23:41.269247    4867 kubeadm.go:310] 
	I0912 15:23:41.269308    4867 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0912 15:23:41.269347    4867 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 15:23:41.269384    4867 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 15:23:41.269389    4867 kubeadm.go:310] 
	I0912 15:23:41.269429    4867 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0912 15:23:41.269466    4867 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0912 15:23:41.269469    4867 kubeadm.go:310] 
	I0912 15:23:41.269526    4867 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9batvv.i8tnvzhsrc8b6qr7 \
	I0912 15:23:41.269584    4867 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:927739ba46076d32ef09500def7ebaf4576e192a933c1b27a78721d37c8894ab \
	I0912 15:23:41.269594    4867 kubeadm.go:310] 	--control-plane 
	I0912 15:23:41.269597    4867 kubeadm.go:310] 
	I0912 15:23:41.269646    4867 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0912 15:23:41.269651    4867 kubeadm.go:310] 
	I0912 15:23:41.269690    4867 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9batvv.i8tnvzhsrc8b6qr7 \
	I0912 15:23:41.269749    4867 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:927739ba46076d32ef09500def7ebaf4576e192a933c1b27a78721d37c8894ab 
	I0912 15:23:41.269923    4867 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
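	(The --discovery-token-ca-cert-hash printed in the join commands above is a SHA-256 of the cluster CA's public key. The standard pipeline from the kubeadm docs recomputes it; note this cluster keeps its CA under /var/lib/minikube/certs rather than the default /etc/kubernetes/pki, per the [certs] phase above:

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
	)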
	I0912 15:23:41.269940    4867 cni.go:84] Creating CNI manager for ""
	I0912 15:23:41.269948    4867 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:23:41.273129    4867 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 15:23:41.281117    4867 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 15:23:41.286034    4867 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
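	(The 496-byte file pushed here is minikube's bridge CNI config: in minikube's template it is a conflist chaining the bridge plugin, with host-local IPAM, and the portmap plugin. To inspect what actually landed on disk inside the guest:

	    sudo cat /etc/cni/net.d/1-k8s.conflist
	)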
	I0912 15:23:41.290918    4867 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 15:23:41.290990    4867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 15:23:41.290990    4867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-841000 minikube.k8s.io/updated_at=2024_09_12T15_23_41_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8 minikube.k8s.io/name=stopped-upgrade-841000 minikube.k8s.io/primary=true
	I0912 15:23:41.330801    4867 ops.go:34] apiserver oom_adj: -16
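
The oom_adj read above is a sanity check that the apiserver is shielded from the kernel OOM killer: -16 on the legacy scale corresponds to an oom_score_adj close to -1000, i.e. the process is almost never selected for OOM kill. A sketch of the same check run by hand inside the guest:

    # Legacy and modern views of the apiserver's OOM-kill protection.
    cat /proc/$(pgrep -n kube-apiserver)/oom_adj
    cat /proc/$(pgrep -n kube-apiserver)/oom_score_adj
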
	I0912 15:23:41.330798    4867 kubeadm.go:1113] duration metric: took 39.852958ms to wait for elevateKubeSystemPrivileges
	I0912 15:23:41.330902    4867 kubeadm.go:394] duration metric: took 4m15.211469291s to StartCluster
	I0912 15:23:41.330913    4867 settings.go:142] acquiring lock: {Name:mk5a46170b8bd524e48b63472100abbce9e9531f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:23:41.331002    4867 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:23:41.331421    4867 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/kubeconfig: {Name:mk048c749582c7be36b3ac030be68b87cf483b70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:23:41.331621    4867 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:23:41.336178    4867 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0912 15:23:41.336232    4867 out.go:177] * Verifying Kubernetes components...
	I0912 15:23:41.331687    4867 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0912 15:23:41.336744    4867 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-841000"
	I0912 15:23:41.336758    4867 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-841000"
	W0912 15:23:41.336766    4867 addons.go:243] addon storage-provisioner should already be in state true
	I0912 15:23:41.336779    4867 host.go:66] Checking if "stopped-upgrade-841000" exists ...
	I0912 15:23:41.336824    4867 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-841000"
	I0912 15:23:41.336842    4867 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-841000"
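
The addon flow here is the programmatic equivalent of the user-facing CLI. A sketch of the same toggles by hand, assuming the host-side minikube binary and this profile name:

    # Enable and inspect addons for the profile under test.
    minikube -p stopped-upgrade-841000 addons enable storage-provisioner
    minikube -p stopped-upgrade-841000 addons list
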
	I0912 15:23:41.337909    4867 kapi.go:59] client config for stopped-upgrade-841000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/stopped-upgrade-841000/client.key", CAFile:"/Users/jenkins/minikube-integration/19616-1259/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1063653d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0912 15:23:41.340090    4867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 15:23:41.338102    4867 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-841000"
	W0912 15:23:41.340126    4867 addons.go:243] addon default-storageclass should already be in state true
	I0912 15:23:41.340138    4867 host.go:66] Checking if "stopped-upgrade-841000" exists ...
	I0912 15:23:41.340851    4867 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 15:23:41.340859    4867 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 15:23:41.340865    4867 sshutil.go:53] new ssh client: &{IP:localhost Port:50482 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/stopped-upgrade-841000/id_rsa Username:docker}
	I0912 15:23:41.344039    4867 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 15:23:41.352234    4867 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 15:23:41.352242    4867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 15:23:41.352250    4867 sshutil.go:53] new ssh client: &{IP:localhost Port:50482 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/stopped-upgrade-841000/id_rsa Username:docker}
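
Both scp steps above ride an SSH session to the guest. An equivalent manual session, using only the port, key path, and username shown in the log:

    ssh -o StrictHostKeyChecking=no \
      -i /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/stopped-upgrade-841000/id_rsa \
      -p 50482 docker@localhost
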
	I0912 15:23:41.406957    4867 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 15:23:41.412132    4867 api_server.go:52] waiting for apiserver process to appear ...
	I0912 15:23:41.412176    4867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 15:23:41.416114    4867 api_server.go:72] duration metric: took 84.484666ms to wait for apiserver process to appear ...
	I0912 15:23:41.416121    4867 api_server.go:88] waiting for apiserver healthz status ...
	I0912 15:23:41.416128    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:23:41.422861    4867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 15:23:41.446035    4867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
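
If the two apply steps above succeed, their effects are visible through the same in-guest kubectl. A sketch of the follow-up checks (hypothetical verification, not part of the test):

    # List what the two manifests should have created.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.24.1/kubectl get storageclass
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.24.1/kubectl -n kube-system get pods
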
	I0912 15:23:41.798978    4867 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0912 15:23:41.798989    4867 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0912 15:23:46.418107    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
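
10.0.2.15 is the guest-side address of QEMU's user-mode (slirp) network, which is generally not routable from the macOS host; when the health checker dials that IP directly from the host, timeouts like the ones that follow are the expected symptom even if the apiserver is healthy inside the VM. A sketch that localizes the failure (hypothetical commands, not run by the test):

    # From the host this is expected to time out under user-mode networking.
    curl -sk --max-time 5 https://10.0.2.15:8443/healthz || echo "unreachable from host"

    # From inside the guest the apiserver answers if it is actually healthy.
    minikube -p stopped-upgrade-841000 ssh -- curl -sk https://localhost:8443/healthz
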
	I0912 15:23:46.418166    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:23:51.418376    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:23:51.418395    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:23:56.418827    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:23:56.418868    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:24:01.419241    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:24:01.419280    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:24:06.419800    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:24:06.419821    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:24:11.420483    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:24:11.420544    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0912 15:24:11.799668    4867 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0912 15:24:11.803867    4867 out.go:177] * Enabled addons: storage-provisioner
	I0912 15:24:11.815842    4867 addons.go:510] duration metric: took 30.484876458s for enable addons: enabled=[storage-provisioner]
	I0912 15:24:16.421545    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:24:16.421596    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:24:21.423140    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:24:21.423178    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:24:26.424759    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:24:26.424786    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:24:31.426617    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:24:31.426661    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:24:36.428809    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:24:36.428848    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:24:41.431020    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:24:41.431243    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:24:41.459778    4867 logs.go:276] 1 containers: [7d37fcad59dd]
	I0912 15:24:41.459871    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:24:41.489596    4867 logs.go:276] 1 containers: [382510bca053]
	I0912 15:24:41.489677    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:24:41.513884    4867 logs.go:276] 2 containers: [25b2c07dd7e1 2e0f4e843685]
	I0912 15:24:41.513951    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:24:41.528297    4867 logs.go:276] 1 containers: [9e5639f89ee1]
	I0912 15:24:41.528371    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:24:41.538903    4867 logs.go:276] 1 containers: [ff05a057c1bb]
	I0912 15:24:41.538969    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:24:41.551217    4867 logs.go:276] 1 containers: [560561b8e67c]
	I0912 15:24:41.551287    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:24:41.561767    4867 logs.go:276] 0 containers: []
	W0912 15:24:41.561779    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:24:41.561840    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:24:41.571754    4867 logs.go:276] 1 containers: [fc6668f65da9]
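
The eight docker ps probes above can be reproduced as one loop inside the guest; a sketch mirroring the exact filters the log uses:

    # Enumerate container IDs per control-plane component, as logs.go does.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      printf '%s: ' "$c"
      docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}' | tr '\n' ' '
      echo
    done
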
	I0912 15:24:41.571768    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:24:41.571773    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:24:41.606422    4867 logs.go:123] Gathering logs for etcd [382510bca053] ...
	I0912 15:24:41.606438    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 382510bca053"
	I0912 15:24:41.620505    4867 logs.go:123] Gathering logs for coredns [25b2c07dd7e1] ...
	I0912 15:24:41.620518    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b2c07dd7e1"
	I0912 15:24:41.632037    4867 logs.go:123] Gathering logs for kube-scheduler [9e5639f89ee1] ...
	I0912 15:24:41.632046    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5639f89ee1"
	I0912 15:24:41.647531    4867 logs.go:123] Gathering logs for kube-controller-manager [560561b8e67c] ...
	I0912 15:24:41.647545    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560561b8e67c"
	I0912 15:24:41.665284    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:24:41.665293    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:24:41.669917    4867 logs.go:123] Gathering logs for kube-apiserver [7d37fcad59dd] ...
	I0912 15:24:41.669924    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d37fcad59dd"
	I0912 15:24:41.685651    4867 logs.go:123] Gathering logs for coredns [2e0f4e843685] ...
	I0912 15:24:41.685661    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0f4e843685"
	I0912 15:24:41.698352    4867 logs.go:123] Gathering logs for kube-proxy [ff05a057c1bb] ...
	I0912 15:24:41.698362    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff05a057c1bb"
	I0912 15:24:41.709606    4867 logs.go:123] Gathering logs for storage-provisioner [fc6668f65da9] ...
	I0912 15:24:41.709615    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc6668f65da9"
	I0912 15:24:41.721280    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:24:41.721291    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:24:41.744780    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:24:41.744791    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:24:41.756559    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:24:41.756568    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:24:41.772755    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:24:41.772848    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
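
These two reflector errors are usually transient during startup: the node authorizer denies the kubelet access to the coredns ConfigMap until a pod referencing it is bound to the node, hence "no relationship found between node ... and this object". A sketch of the follow-up check once the apiserver is reachable (hypothetical; the test never gets that far here):

    # Confirm the node registered and coredns pods landed on it.
    kubectl get node stopped-upgrade-841000
    kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
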
	I0912 15:24:41.793795    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:24:41.793803    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:24:41.793827    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:24:41.793832    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:24:41.793835    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:24:41.793838    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:24:41.793843    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:24:51.797053    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:24:56.799354    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:24:56.800422    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:24:56.839900    4867 logs.go:276] 1 containers: [7d37fcad59dd]
	I0912 15:24:56.840039    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:24:56.868377    4867 logs.go:276] 1 containers: [382510bca053]
	I0912 15:24:56.868492    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:24:56.883295    4867 logs.go:276] 2 containers: [25b2c07dd7e1 2e0f4e843685]
	I0912 15:24:56.883368    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:24:56.894652    4867 logs.go:276] 1 containers: [9e5639f89ee1]
	I0912 15:24:56.894724    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:24:56.905190    4867 logs.go:276] 1 containers: [ff05a057c1bb]
	I0912 15:24:56.905267    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:24:56.921778    4867 logs.go:276] 1 containers: [560561b8e67c]
	I0912 15:24:56.921848    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:24:56.932285    4867 logs.go:276] 0 containers: []
	W0912 15:24:56.932293    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:24:56.932344    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:24:56.942709    4867 logs.go:276] 1 containers: [fc6668f65da9]
	I0912 15:24:56.942723    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:24:56.942729    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:24:56.958158    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:24:56.958248    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:24:56.978633    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:24:56.978638    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:24:56.982760    4867 logs.go:123] Gathering logs for kube-apiserver [7d37fcad59dd] ...
	I0912 15:24:56.982769    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d37fcad59dd"
	I0912 15:24:56.997825    4867 logs.go:123] Gathering logs for etcd [382510bca053] ...
	I0912 15:24:56.997838    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 382510bca053"
	I0912 15:24:57.012122    4867 logs.go:123] Gathering logs for coredns [2e0f4e843685] ...
	I0912 15:24:57.012131    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0f4e843685"
	I0912 15:24:57.023703    4867 logs.go:123] Gathering logs for storage-provisioner [fc6668f65da9] ...
	I0912 15:24:57.023713    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc6668f65da9"
	I0912 15:24:57.035076    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:24:57.035087    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:24:57.047027    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:24:57.047041    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:24:57.082618    4867 logs.go:123] Gathering logs for coredns [25b2c07dd7e1] ...
	I0912 15:24:57.082631    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b2c07dd7e1"
	I0912 15:24:57.094200    4867 logs.go:123] Gathering logs for kube-scheduler [9e5639f89ee1] ...
	I0912 15:24:57.094211    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5639f89ee1"
	I0912 15:24:57.108521    4867 logs.go:123] Gathering logs for kube-proxy [ff05a057c1bb] ...
	I0912 15:24:57.108531    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff05a057c1bb"
	I0912 15:24:57.125085    4867 logs.go:123] Gathering logs for kube-controller-manager [560561b8e67c] ...
	I0912 15:24:57.125097    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560561b8e67c"
	I0912 15:24:57.145003    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:24:57.145016    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:24:57.168422    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:24:57.168430    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:24:57.168455    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:24:57.168460    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:24:57.168462    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:24:57.168466    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:24:57.168469    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:25:07.172605    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:25:12.173579    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:25:12.173998    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:25:12.214218    4867 logs.go:276] 1 containers: [7d37fcad59dd]
	I0912 15:25:12.214346    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:25:12.235831    4867 logs.go:276] 1 containers: [382510bca053]
	I0912 15:25:12.235942    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:25:12.251797    4867 logs.go:276] 2 containers: [25b2c07dd7e1 2e0f4e843685]
	I0912 15:25:12.251873    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:25:12.264657    4867 logs.go:276] 1 containers: [9e5639f89ee1]
	I0912 15:25:12.264729    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:25:12.275706    4867 logs.go:276] 1 containers: [ff05a057c1bb]
	I0912 15:25:12.275767    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:25:12.286285    4867 logs.go:276] 1 containers: [560561b8e67c]
	I0912 15:25:12.286354    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:25:12.296727    4867 logs.go:276] 0 containers: []
	W0912 15:25:12.296739    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:25:12.296799    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:25:12.307769    4867 logs.go:276] 1 containers: [fc6668f65da9]
	I0912 15:25:12.307784    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:25:12.307790    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:25:12.321680    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:25:12.321694    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:25:12.357276    4867 logs.go:123] Gathering logs for etcd [382510bca053] ...
	I0912 15:25:12.357287    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 382510bca053"
	I0912 15:25:12.372165    4867 logs.go:123] Gathering logs for coredns [25b2c07dd7e1] ...
	I0912 15:25:12.372177    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b2c07dd7e1"
	I0912 15:25:12.384505    4867 logs.go:123] Gathering logs for kube-proxy [ff05a057c1bb] ...
	I0912 15:25:12.384519    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff05a057c1bb"
	I0912 15:25:12.396073    4867 logs.go:123] Gathering logs for kube-controller-manager [560561b8e67c] ...
	I0912 15:25:12.396086    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560561b8e67c"
	I0912 15:25:12.413527    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:25:12.413537    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:25:12.437882    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:25:12.437891    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:25:12.454528    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:25:12.454618    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:25:12.475139    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:25:12.475144    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:25:12.479659    4867 logs.go:123] Gathering logs for kube-apiserver [7d37fcad59dd] ...
	I0912 15:25:12.479668    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d37fcad59dd"
	I0912 15:25:12.494758    4867 logs.go:123] Gathering logs for coredns [2e0f4e843685] ...
	I0912 15:25:12.494767    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0f4e843685"
	I0912 15:25:12.512115    4867 logs.go:123] Gathering logs for kube-scheduler [9e5639f89ee1] ...
	I0912 15:25:12.512130    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5639f89ee1"
	I0912 15:25:12.526803    4867 logs.go:123] Gathering logs for storage-provisioner [fc6668f65da9] ...
	I0912 15:25:12.526815    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc6668f65da9"
	I0912 15:25:12.537840    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:25:12.537853    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:25:12.537877    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:25:12.537883    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:25:12.537890    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:25:12.537894    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:25:12.537898    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:25:22.541888    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:25:27.544389    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:25:27.544662    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:25:27.572687    4867 logs.go:276] 1 containers: [7d37fcad59dd]
	I0912 15:25:27.572799    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:25:27.589982    4867 logs.go:276] 1 containers: [382510bca053]
	I0912 15:25:27.590062    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:25:27.603153    4867 logs.go:276] 2 containers: [25b2c07dd7e1 2e0f4e843685]
	I0912 15:25:27.603223    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:25:27.614056    4867 logs.go:276] 1 containers: [9e5639f89ee1]
	I0912 15:25:27.614114    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:25:27.624171    4867 logs.go:276] 1 containers: [ff05a057c1bb]
	I0912 15:25:27.624235    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:25:27.634467    4867 logs.go:276] 1 containers: [560561b8e67c]
	I0912 15:25:27.634536    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:25:27.647336    4867 logs.go:276] 0 containers: []
	W0912 15:25:27.647347    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:25:27.647400    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:25:27.657656    4867 logs.go:276] 1 containers: [fc6668f65da9]
	I0912 15:25:27.657670    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:25:27.657675    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:25:27.669122    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:25:27.669132    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:25:27.684444    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:25:27.684536    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:25:27.705145    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:25:27.705150    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:25:27.709011    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:25:27.709020    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:25:27.743794    4867 logs.go:123] Gathering logs for coredns [2e0f4e843685] ...
	I0912 15:25:27.743807    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0f4e843685"
	I0912 15:25:27.755915    4867 logs.go:123] Gathering logs for kube-controller-manager [560561b8e67c] ...
	I0912 15:25:27.755928    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560561b8e67c"
	I0912 15:25:27.773597    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:25:27.773607    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:25:27.798110    4867 logs.go:123] Gathering logs for kube-apiserver [7d37fcad59dd] ...
	I0912 15:25:27.798120    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d37fcad59dd"
	I0912 15:25:27.812562    4867 logs.go:123] Gathering logs for etcd [382510bca053] ...
	I0912 15:25:27.812572    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 382510bca053"
	I0912 15:25:27.826415    4867 logs.go:123] Gathering logs for coredns [25b2c07dd7e1] ...
	I0912 15:25:27.826427    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b2c07dd7e1"
	I0912 15:25:27.838239    4867 logs.go:123] Gathering logs for kube-scheduler [9e5639f89ee1] ...
	I0912 15:25:27.838251    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5639f89ee1"
	I0912 15:25:27.852960    4867 logs.go:123] Gathering logs for kube-proxy [ff05a057c1bb] ...
	I0912 15:25:27.852972    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff05a057c1bb"
	I0912 15:25:27.864508    4867 logs.go:123] Gathering logs for storage-provisioner [fc6668f65da9] ...
	I0912 15:25:27.864521    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc6668f65da9"
	I0912 15:25:27.882697    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:25:27.882711    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:25:27.882737    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:25:27.882741    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:25:27.882745    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:25:27.882748    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:25:27.882751    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:25:37.886816    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:25:42.889561    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:25:42.890094    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:25:42.924971    4867 logs.go:276] 1 containers: [7d37fcad59dd]
	I0912 15:25:42.925112    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:25:42.945664    4867 logs.go:276] 1 containers: [382510bca053]
	I0912 15:25:42.945779    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:25:42.959253    4867 logs.go:276] 2 containers: [25b2c07dd7e1 2e0f4e843685]
	I0912 15:25:42.959327    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:25:42.971335    4867 logs.go:276] 1 containers: [9e5639f89ee1]
	I0912 15:25:42.971403    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:25:42.982266    4867 logs.go:276] 1 containers: [ff05a057c1bb]
	I0912 15:25:42.982331    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:25:42.992858    4867 logs.go:276] 1 containers: [560561b8e67c]
	I0912 15:25:42.992924    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:25:43.002703    4867 logs.go:276] 0 containers: []
	W0912 15:25:43.002713    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:25:43.002764    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:25:43.016544    4867 logs.go:276] 1 containers: [fc6668f65da9]
	I0912 15:25:43.016559    4867 logs.go:123] Gathering logs for coredns [25b2c07dd7e1] ...
	I0912 15:25:43.016564    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b2c07dd7e1"
	I0912 15:25:43.028031    4867 logs.go:123] Gathering logs for coredns [2e0f4e843685] ...
	I0912 15:25:43.028043    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0f4e843685"
	I0912 15:25:43.044581    4867 logs.go:123] Gathering logs for kube-proxy [ff05a057c1bb] ...
	I0912 15:25:43.044593    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff05a057c1bb"
	I0912 15:25:43.056306    4867 logs.go:123] Gathering logs for kube-controller-manager [560561b8e67c] ...
	I0912 15:25:43.056319    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560561b8e67c"
	I0912 15:25:43.073107    4867 logs.go:123] Gathering logs for storage-provisioner [fc6668f65da9] ...
	I0912 15:25:43.073120    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc6668f65da9"
	I0912 15:25:43.088976    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:25:43.088988    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:25:43.101098    4867 logs.go:123] Gathering logs for kube-apiserver [7d37fcad59dd] ...
	I0912 15:25:43.101112    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d37fcad59dd"
	I0912 15:25:43.115224    4867 logs.go:123] Gathering logs for etcd [382510bca053] ...
	I0912 15:25:43.115238    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 382510bca053"
	I0912 15:25:43.130696    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:25:43.130708    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:25:43.165891    4867 logs.go:123] Gathering logs for kube-scheduler [9e5639f89ee1] ...
	I0912 15:25:43.165901    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5639f89ee1"
	I0912 15:25:43.186366    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:25:43.186376    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:25:43.211237    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:25:43.211245    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:25:43.228268    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:25:43.228360    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:25:43.248859    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:25:43.248866    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:25:43.252926    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:25:43.252935    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:25:43.252958    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:25:43.252962    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:25:43.252966    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:25:43.252969    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:25:43.252971    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:25:53.257006    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:25:58.257314    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:25:58.257421    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:25:58.272229    4867 logs.go:276] 1 containers: [7d37fcad59dd]
	I0912 15:25:58.272301    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:25:58.284613    4867 logs.go:276] 1 containers: [382510bca053]
	I0912 15:25:58.284677    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:25:58.296079    4867 logs.go:276] 4 containers: [4b7aad71a1f1 2a8315376a47 25b2c07dd7e1 2e0f4e843685]
	I0912 15:25:58.296150    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:25:58.307031    4867 logs.go:276] 1 containers: [9e5639f89ee1]
	I0912 15:25:58.307094    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:25:58.322611    4867 logs.go:276] 1 containers: [ff05a057c1bb]
	I0912 15:25:58.322677    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:25:58.334178    4867 logs.go:276] 1 containers: [560561b8e67c]
	I0912 15:25:58.334237    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:25:58.345089    4867 logs.go:276] 0 containers: []
	W0912 15:25:58.345099    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:25:58.345148    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:25:58.358990    4867 logs.go:276] 1 containers: [fc6668f65da9]
	I0912 15:25:58.359012    4867 logs.go:123] Gathering logs for etcd [382510bca053] ...
	I0912 15:25:58.359017    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 382510bca053"
	I0912 15:25:58.373169    4867 logs.go:123] Gathering logs for kube-scheduler [9e5639f89ee1] ...
	I0912 15:25:58.373180    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5639f89ee1"
	I0912 15:25:58.394367    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:25:58.394380    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:25:58.406397    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:25:58.406409    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:25:58.410623    4867 logs.go:123] Gathering logs for coredns [2a8315376a47] ...
	I0912 15:25:58.410628    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a8315376a47"
	I0912 15:25:58.422278    4867 logs.go:123] Gathering logs for storage-provisioner [fc6668f65da9] ...
	I0912 15:25:58.422290    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc6668f65da9"
	I0912 15:25:58.434124    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:25:58.434133    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:25:58.469427    4867 logs.go:123] Gathering logs for kube-proxy [ff05a057c1bb] ...
	I0912 15:25:58.469438    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff05a057c1bb"
	I0912 15:25:58.481615    4867 logs.go:123] Gathering logs for kube-controller-manager [560561b8e67c] ...
	I0912 15:25:58.481630    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560561b8e67c"
	I0912 15:25:58.499285    4867 logs.go:123] Gathering logs for coredns [25b2c07dd7e1] ...
	I0912 15:25:58.499295    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b2c07dd7e1"
	I0912 15:25:58.511170    4867 logs.go:123] Gathering logs for kube-apiserver [7d37fcad59dd] ...
	I0912 15:25:58.511182    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d37fcad59dd"
	I0912 15:25:58.526499    4867 logs.go:123] Gathering logs for coredns [4b7aad71a1f1] ...
	I0912 15:25:58.526512    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b7aad71a1f1"
	I0912 15:25:58.539570    4867 logs.go:123] Gathering logs for coredns [2e0f4e843685] ...
	I0912 15:25:58.539581    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0f4e843685"
	I0912 15:25:58.551209    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:25:58.551222    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:25:58.576700    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:25:58.576711    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:25:58.594355    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:25:58.594447    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:25:58.614839    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:25:58.614845    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:25:58.614867    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:25:58.614871    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:25:58.614883    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	  Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:25:58.614887    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:25:58.614891    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:26:08.618876    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:26:13.619910    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:26:13.620005    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:26:13.638815    4867 logs.go:276] 1 containers: [7d37fcad59dd]
	I0912 15:26:13.638893    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:26:13.651204    4867 logs.go:276] 1 containers: [382510bca053]
	I0912 15:26:13.651264    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:26:13.663443    4867 logs.go:276] 4 containers: [4b7aad71a1f1 2a8315376a47 25b2c07dd7e1 2e0f4e843685]
	I0912 15:26:13.663502    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:26:13.676759    4867 logs.go:276] 1 containers: [9e5639f89ee1]
	I0912 15:26:13.676831    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:26:13.689661    4867 logs.go:276] 1 containers: [ff05a057c1bb]
	I0912 15:26:13.689720    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:26:13.701288    4867 logs.go:276] 1 containers: [560561b8e67c]
	I0912 15:26:13.701356    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:26:13.714063    4867 logs.go:276] 0 containers: []
	W0912 15:26:13.714075    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:26:13.714129    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:26:13.726683    4867 logs.go:276] 1 containers: [fc6668f65da9]
	I0912 15:26:13.726698    4867 logs.go:123] Gathering logs for kube-apiserver [7d37fcad59dd] ...
	I0912 15:26:13.726703    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d37fcad59dd"
	I0912 15:26:13.742131    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:26:13.742147    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:26:13.780405    4867 logs.go:123] Gathering logs for coredns [4b7aad71a1f1] ...
	I0912 15:26:13.780413    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b7aad71a1f1"
	I0912 15:26:13.793501    4867 logs.go:123] Gathering logs for coredns [2a8315376a47] ...
	I0912 15:26:13.793513    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a8315376a47"
	I0912 15:26:13.806013    4867 logs.go:123] Gathering logs for coredns [25b2c07dd7e1] ...
	I0912 15:26:13.806024    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b2c07dd7e1"
	I0912 15:26:13.818719    4867 logs.go:123] Gathering logs for kube-scheduler [9e5639f89ee1] ...
	I0912 15:26:13.818734    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5639f89ee1"
	I0912 15:26:13.835080    4867 logs.go:123] Gathering logs for kube-proxy [ff05a057c1bb] ...
	I0912 15:26:13.835100    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff05a057c1bb"
	I0912 15:26:13.848345    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:26:13.848354    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:26:13.861217    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:26:13.861228    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:26:13.865517    4867 logs.go:123] Gathering logs for kube-controller-manager [560561b8e67c] ...
	I0912 15:26:13.865531    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560561b8e67c"
	I0912 15:26:13.884954    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:26:13.884968    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:26:13.909564    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:26:13.909580    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:26:13.925895    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:26:13.925996    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:26:13.947360    4867 logs.go:123] Gathering logs for etcd [382510bca053] ...
	I0912 15:26:13.947383    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 382510bca053"
	I0912 15:26:13.963252    4867 logs.go:123] Gathering logs for coredns [2e0f4e843685] ...
	I0912 15:26:13.963267    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0f4e843685"
	I0912 15:26:13.976247    4867 logs.go:123] Gathering logs for storage-provisioner [fc6668f65da9] ...
	I0912 15:26:13.976259    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc6668f65da9"
	I0912 15:26:13.993453    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:26:13.993464    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:26:13.993491    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:26:13.993496    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:26:13.993501    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:26:13.993513    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:26:13.993516    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:26:23.997148    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:26:28.998547    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:26:28.998871    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:26:29.027478    4867 logs.go:276] 1 containers: [7d37fcad59dd]
	I0912 15:26:29.027590    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:26:29.046023    4867 logs.go:276] 1 containers: [382510bca053]
	I0912 15:26:29.046100    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:26:29.060250    4867 logs.go:276] 4 containers: [4b7aad71a1f1 2a8315376a47 25b2c07dd7e1 2e0f4e843685]
	I0912 15:26:29.060322    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:26:29.077888    4867 logs.go:276] 1 containers: [9e5639f89ee1]
	I0912 15:26:29.077940    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:26:29.088726    4867 logs.go:276] 1 containers: [ff05a057c1bb]
	I0912 15:26:29.088791    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:26:29.099033    4867 logs.go:276] 1 containers: [560561b8e67c]
	I0912 15:26:29.099099    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:26:29.108849    4867 logs.go:276] 0 containers: []
	W0912 15:26:29.108859    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:26:29.108906    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:26:29.119385    4867 logs.go:276] 1 containers: [fc6668f65da9]
	I0912 15:26:29.119401    4867 logs.go:123] Gathering logs for coredns [4b7aad71a1f1] ...
	I0912 15:26:29.119407    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b7aad71a1f1"
	I0912 15:26:29.132527    4867 logs.go:123] Gathering logs for coredns [2a8315376a47] ...
	I0912 15:26:29.132536    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a8315376a47"
	I0912 15:26:29.144040    4867 logs.go:123] Gathering logs for etcd [382510bca053] ...
	I0912 15:26:29.144048    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 382510bca053"
	I0912 15:26:29.157609    4867 logs.go:123] Gathering logs for kube-apiserver [7d37fcad59dd] ...
	I0912 15:26:29.157619    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d37fcad59dd"
	I0912 15:26:29.172958    4867 logs.go:123] Gathering logs for coredns [25b2c07dd7e1] ...
	I0912 15:26:29.172971    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b2c07dd7e1"
	I0912 15:26:29.184873    4867 logs.go:123] Gathering logs for coredns [2e0f4e843685] ...
	I0912 15:26:29.184888    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0f4e843685"
	I0912 15:26:29.196296    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:26:29.196306    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:26:29.219888    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:26:29.219895    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:26:29.224413    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:26:29.224420    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:26:29.259176    4867 logs.go:123] Gathering logs for kube-scheduler [9e5639f89ee1] ...
	I0912 15:26:29.259190    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5639f89ee1"
	I0912 15:26:29.274442    4867 logs.go:123] Gathering logs for kube-proxy [ff05a057c1bb] ...
	I0912 15:26:29.274483    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff05a057c1bb"
	I0912 15:26:29.286221    4867 logs.go:123] Gathering logs for kube-controller-manager [560561b8e67c] ...
	I0912 15:26:29.286229    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560561b8e67c"
	I0912 15:26:29.303878    4867 logs.go:123] Gathering logs for storage-provisioner [fc6668f65da9] ...
	I0912 15:26:29.303887    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc6668f65da9"
	I0912 15:26:29.315149    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:26:29.315159    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:26:29.327049    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:26:29.327063    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:26:29.344854    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:26:29.344946    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:26:29.365165    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:26:29.365172    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:26:29.365196    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:26:29.365200    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:26:29.365203    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:26:29.365205    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:26:29.365207    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:26:39.368251    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:26:44.370479    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:26:44.370918    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:26:44.410310    4867 logs.go:276] 1 containers: [7d37fcad59dd]
	I0912 15:26:44.410450    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:26:44.432379    4867 logs.go:276] 1 containers: [382510bca053]
	I0912 15:26:44.432483    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:26:44.447617    4867 logs.go:276] 4 containers: [4b7aad71a1f1 2a8315376a47 25b2c07dd7e1 2e0f4e843685]
	I0912 15:26:44.447696    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:26:44.459836    4867 logs.go:276] 1 containers: [9e5639f89ee1]
	I0912 15:26:44.459899    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:26:44.470570    4867 logs.go:276] 1 containers: [ff05a057c1bb]
	I0912 15:26:44.470641    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:26:44.481666    4867 logs.go:276] 1 containers: [560561b8e67c]
	I0912 15:26:44.481738    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:26:44.491698    4867 logs.go:276] 0 containers: []
	W0912 15:26:44.491716    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:26:44.491770    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:26:44.503700    4867 logs.go:276] 1 containers: [fc6668f65da9]
	I0912 15:26:44.503716    4867 logs.go:123] Gathering logs for storage-provisioner [fc6668f65da9] ...
	I0912 15:26:44.503722    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc6668f65da9"
	I0912 15:26:44.515422    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:26:44.515434    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:26:44.549085    4867 logs.go:123] Gathering logs for coredns [4b7aad71a1f1] ...
	I0912 15:26:44.549098    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b7aad71a1f1"
	I0912 15:26:44.560645    4867 logs.go:123] Gathering logs for coredns [2a8315376a47] ...
	I0912 15:26:44.560656    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a8315376a47"
	I0912 15:26:44.572240    4867 logs.go:123] Gathering logs for coredns [25b2c07dd7e1] ...
	I0912 15:26:44.572252    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b2c07dd7e1"
	I0912 15:26:44.585112    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:26:44.585125    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:26:44.596620    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:26:44.596634    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:26:44.600643    4867 logs.go:123] Gathering logs for etcd [382510bca053] ...
	I0912 15:26:44.600650    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 382510bca053"
	I0912 15:26:44.613949    4867 logs.go:123] Gathering logs for kube-scheduler [9e5639f89ee1] ...
	I0912 15:26:44.613962    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5639f89ee1"
	I0912 15:26:44.628428    4867 logs.go:123] Gathering logs for kube-apiserver [7d37fcad59dd] ...
	I0912 15:26:44.628439    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d37fcad59dd"
	I0912 15:26:44.642892    4867 logs.go:123] Gathering logs for coredns [2e0f4e843685] ...
	I0912 15:26:44.642901    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0f4e843685"
	I0912 15:26:44.659327    4867 logs.go:123] Gathering logs for kube-proxy [ff05a057c1bb] ...
	I0912 15:26:44.659340    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff05a057c1bb"
	I0912 15:26:44.671571    4867 logs.go:123] Gathering logs for kube-controller-manager [560561b8e67c] ...
	I0912 15:26:44.671582    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560561b8e67c"
	I0912 15:26:44.688888    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:26:44.688898    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:26:44.712334    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:26:44.712342    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:26:44.729699    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:26:44.729791    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:26:44.750500    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:26:44.750506    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:26:44.750531    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:26:44.750535    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:26:44.750538    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:26:44.750541    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:26:44.750543    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:26:54.754549    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:26:59.755283    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:26:59.755347    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:26:59.766611    4867 logs.go:276] 1 containers: [7d37fcad59dd]
	I0912 15:26:59.766676    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:26:59.777775    4867 logs.go:276] 1 containers: [382510bca053]
	I0912 15:26:59.777817    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:26:59.789072    4867 logs.go:276] 4 containers: [4b7aad71a1f1 2a8315376a47 25b2c07dd7e1 2e0f4e843685]
	I0912 15:26:59.789127    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:26:59.800080    4867 logs.go:276] 1 containers: [9e5639f89ee1]
	I0912 15:26:59.800152    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:26:59.810254    4867 logs.go:276] 1 containers: [ff05a057c1bb]
	I0912 15:26:59.810316    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:26:59.820658    4867 logs.go:276] 1 containers: [560561b8e67c]
	I0912 15:26:59.820717    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:26:59.830882    4867 logs.go:276] 0 containers: []
	W0912 15:26:59.830892    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:26:59.830946    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:26:59.840709    4867 logs.go:276] 1 containers: [fc6668f65da9]
	I0912 15:26:59.840734    4867 logs.go:123] Gathering logs for etcd [382510bca053] ...
	I0912 15:26:59.840742    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 382510bca053"
	I0912 15:26:59.854915    4867 logs.go:123] Gathering logs for coredns [25b2c07dd7e1] ...
	I0912 15:26:59.854925    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b2c07dd7e1"
	I0912 15:26:59.871843    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:26:59.871854    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:26:59.876451    4867 logs.go:123] Gathering logs for coredns [4b7aad71a1f1] ...
	I0912 15:26:59.876461    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b7aad71a1f1"
	I0912 15:26:59.888035    4867 logs.go:123] Gathering logs for coredns [2e0f4e843685] ...
	I0912 15:26:59.888045    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0f4e843685"
	I0912 15:26:59.899683    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:26:59.899695    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:26:59.915441    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:26:59.915532    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:26:59.936180    4867 logs.go:123] Gathering logs for kube-proxy [ff05a057c1bb] ...
	I0912 15:26:59.936191    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff05a057c1bb"
	I0912 15:26:59.947753    4867 logs.go:123] Gathering logs for kube-controller-manager [560561b8e67c] ...
	I0912 15:26:59.947764    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560561b8e67c"
	I0912 15:26:59.964880    4867 logs.go:123] Gathering logs for storage-provisioner [fc6668f65da9] ...
	I0912 15:26:59.964889    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc6668f65da9"
	I0912 15:26:59.975973    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:26:59.975984    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:26:59.988155    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:26:59.988170    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:27:00.021392    4867 logs.go:123] Gathering logs for kube-apiserver [7d37fcad59dd] ...
	I0912 15:27:00.021402    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d37fcad59dd"
	I0912 15:27:00.035878    4867 logs.go:123] Gathering logs for coredns [2a8315376a47] ...
	I0912 15:27:00.035891    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a8315376a47"
	I0912 15:27:00.047541    4867 logs.go:123] Gathering logs for kube-scheduler [9e5639f89ee1] ...
	I0912 15:27:00.047555    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5639f89ee1"
	I0912 15:27:00.061926    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:27:00.061937    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:27:00.087654    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:27:00.087665    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:27:00.087691    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:27:00.087707    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:27:00.087711    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:27:00.087716    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:27:00.087720    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:27:10.090382    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:27:15.091325    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:27:15.091429    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:27:15.102900    4867 logs.go:276] 1 containers: [7d37fcad59dd]
	I0912 15:27:15.102965    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:27:15.113880    4867 logs.go:276] 1 containers: [382510bca053]
	I0912 15:27:15.113967    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:27:15.128654    4867 logs.go:276] 4 containers: [4b7aad71a1f1 2a8315376a47 25b2c07dd7e1 2e0f4e843685]
	I0912 15:27:15.128721    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:27:15.140278    4867 logs.go:276] 1 containers: [9e5639f89ee1]
	I0912 15:27:15.140340    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:27:15.152448    4867 logs.go:276] 1 containers: [ff05a057c1bb]
	I0912 15:27:15.152526    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:27:15.173298    4867 logs.go:276] 1 containers: [560561b8e67c]
	I0912 15:27:15.173360    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:27:15.184893    4867 logs.go:276] 0 containers: []
	W0912 15:27:15.184904    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:27:15.184961    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:27:15.197064    4867 logs.go:276] 1 containers: [fc6668f65da9]
	I0912 15:27:15.197078    4867 logs.go:123] Gathering logs for coredns [4b7aad71a1f1] ...
	I0912 15:27:15.197083    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b7aad71a1f1"
	I0912 15:27:15.210308    4867 logs.go:123] Gathering logs for coredns [25b2c07dd7e1] ...
	I0912 15:27:15.210317    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b2c07dd7e1"
	I0912 15:27:15.223093    4867 logs.go:123] Gathering logs for coredns [2e0f4e843685] ...
	I0912 15:27:15.223104    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0f4e843685"
	I0912 15:27:15.236105    4867 logs.go:123] Gathering logs for kube-apiserver [7d37fcad59dd] ...
	I0912 15:27:15.236115    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d37fcad59dd"
	I0912 15:27:15.253706    4867 logs.go:123] Gathering logs for etcd [382510bca053] ...
	I0912 15:27:15.253716    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 382510bca053"
	I0912 15:27:15.268044    4867 logs.go:123] Gathering logs for coredns [2a8315376a47] ...
	I0912 15:27:15.268057    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a8315376a47"
	I0912 15:27:15.281616    4867 logs.go:123] Gathering logs for kube-scheduler [9e5639f89ee1] ...
	I0912 15:27:15.281628    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5639f89ee1"
	I0912 15:27:15.297735    4867 logs.go:123] Gathering logs for kube-controller-manager [560561b8e67c] ...
	I0912 15:27:15.297744    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560561b8e67c"
	I0912 15:27:15.315810    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:27:15.315822    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:27:15.341788    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:27:15.341797    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:27:15.346265    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:27:15.346274    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:27:15.359348    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:27:15.359358    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:27:15.396914    4867 logs.go:123] Gathering logs for kube-proxy [ff05a057c1bb] ...
	I0912 15:27:15.396929    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff05a057c1bb"
	I0912 15:27:15.410326    4867 logs.go:123] Gathering logs for storage-provisioner [fc6668f65da9] ...
	I0912 15:27:15.410336    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc6668f65da9"
	I0912 15:27:15.424603    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:27:15.424612    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:27:15.440359    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:27:15.440459    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:27:15.462214    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:27:15.462228    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:27:15.462257    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:27:15.462263    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:27:15.462266    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:27:15.462270    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:27:15.462272    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:27:25.465857    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:27:30.468309    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:27:30.468777    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0912 15:27:30.508709    4867 logs.go:276] 1 containers: [7d37fcad59dd]
	I0912 15:27:30.508836    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0912 15:27:30.529801    4867 logs.go:276] 1 containers: [382510bca053]
	I0912 15:27:30.529892    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0912 15:27:30.548638    4867 logs.go:276] 4 containers: [4b7aad71a1f1 2a8315376a47 25b2c07dd7e1 2e0f4e843685]
	I0912 15:27:30.548719    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0912 15:27:30.560775    4867 logs.go:276] 1 containers: [9e5639f89ee1]
	I0912 15:27:30.560843    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0912 15:27:30.571742    4867 logs.go:276] 1 containers: [ff05a057c1bb]
	I0912 15:27:30.571801    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0912 15:27:30.582428    4867 logs.go:276] 1 containers: [560561b8e67c]
	I0912 15:27:30.582493    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0912 15:27:30.593658    4867 logs.go:276] 0 containers: []
	W0912 15:27:30.593667    4867 logs.go:278] No container was found matching "kindnet"
	I0912 15:27:30.593721    4867 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0912 15:27:30.604327    4867 logs.go:276] 1 containers: [fc6668f65da9]
	I0912 15:27:30.604342    4867 logs.go:123] Gathering logs for dmesg ...
	I0912 15:27:30.604348    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 15:27:30.608649    4867 logs.go:123] Gathering logs for etcd [382510bca053] ...
	I0912 15:27:30.608657    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 382510bca053"
	I0912 15:27:30.632934    4867 logs.go:123] Gathering logs for coredns [2a8315376a47] ...
	I0912 15:27:30.632946    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a8315376a47"
	I0912 15:27:30.644209    4867 logs.go:123] Gathering logs for coredns [2e0f4e843685] ...
	I0912 15:27:30.644221    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0f4e843685"
	I0912 15:27:30.655625    4867 logs.go:123] Gathering logs for kube-apiserver [7d37fcad59dd] ...
	I0912 15:27:30.655636    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d37fcad59dd"
	I0912 15:27:30.670451    4867 logs.go:123] Gathering logs for coredns [4b7aad71a1f1] ...
	I0912 15:27:30.670463    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b7aad71a1f1"
	I0912 15:27:30.683950    4867 logs.go:123] Gathering logs for storage-provisioner [fc6668f65da9] ...
	I0912 15:27:30.683960    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc6668f65da9"
	I0912 15:27:30.694851    4867 logs.go:123] Gathering logs for kubelet ...
	I0912 15:27:30.694861    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 15:27:30.710487    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:27:30.710580    4867 logs.go:138] Found kubelet problem: Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:27:30.731029    4867 logs.go:123] Gathering logs for kube-scheduler [9e5639f89ee1] ...
	I0912 15:27:30.731034    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5639f89ee1"
	I0912 15:27:30.745741    4867 logs.go:123] Gathering logs for kube-controller-manager [560561b8e67c] ...
	I0912 15:27:30.745754    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 560561b8e67c"
	I0912 15:27:30.768228    4867 logs.go:123] Gathering logs for container status ...
	I0912 15:27:30.768239    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 15:27:30.780468    4867 logs.go:123] Gathering logs for describe nodes ...
	I0912 15:27:30.780479    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 15:27:30.814319    4867 logs.go:123] Gathering logs for coredns [25b2c07dd7e1] ...
	I0912 15:27:30.814333    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b2c07dd7e1"
	I0912 15:27:30.826108    4867 logs.go:123] Gathering logs for kube-proxy [ff05a057c1bb] ...
	I0912 15:27:30.826121    4867 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff05a057c1bb"
	I0912 15:27:30.838206    4867 logs.go:123] Gathering logs for Docker ...
	I0912 15:27:30.838219    4867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0912 15:27:30.862506    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:27:30.862516    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 15:27:30.862541    4867 out.go:270] X Problems detected in kubelet:
	W0912 15:27:30.862546    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: W0912 22:19:44.493162    1656 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	W0912 15:27:30.862549    4867 out.go:270]   Sep 12 22:19:44 stopped-upgrade-841000 kubelet[1656]: E0912 22:19:44.493201    1656 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-841000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-841000' and this object
	I0912 15:27:30.862553    4867 out.go:358] Setting ErrFile to fd 2...
	I0912 15:27:30.862556    4867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:27:40.865756    4867 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0912 15:27:45.868522    4867 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 15:27:45.875740    4867 out.go:201] 
	W0912 15:27:45.879698    4867 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0912 15:27:45.879730    4867 out.go:270] * 
	W0912 15:27:45.882268    4867 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:27:45.893638    4867 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-841000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (581.51s)

TestPause/serial/Start (9.92s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-044000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
E0912 15:24:55.800361    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/functional-384000/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-044000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.86262325s)

-- stdout --
	* [pause-044000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-044000" primary control-plane node in "pause-044000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-044000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-044000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-044000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-044000 -n pause-044000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-044000 -n pause-044000: exit status 7 (56.06925ms)

-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-044000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.92s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (10.02s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-190000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-190000 --driver=qemu2 : exit status 80 (9.968484542s)

-- stdout --
	* [NoKubernetes-190000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-190000" primary control-plane node in "NoKubernetes-190000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-190000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-190000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-190000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-190000 -n NoKubernetes-190000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-190000 -n NoKubernetes-190000: exit status 7 (50.88625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-190000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (10.02s)

TestNoKubernetes/serial/StartWithStopK8s (5.29s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-190000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-190000 --no-kubernetes --driver=qemu2 : exit status 80 (5.250040333s)

-- stdout --
	* [NoKubernetes-190000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-190000
	* Restarting existing qemu2 VM for "NoKubernetes-190000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-190000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-190000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-190000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-190000 -n NoKubernetes-190000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-190000 -n NoKubernetes-190000: exit status 7 (42.731917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-190000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.29s)
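
Note: because the NoKubernetes-190000 profile already exists at this point, the failure surfaces as "driver start:" (restarting the existing VM) rather than "creating host: create: creating:", but the terminal error is the same refused connect to /var/run/socket_vmnet. One way to separate a daemon-side problem from a minikube-side one is to run a trivial command through the client binary directly; this sketch assumes the "<socket> <command...>" calling convention visible in the --alsologtostderr traces later in this report.

  # With no daemon listening, the client fails at connect(), before /usr/bin/true runs.
  /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true
  echo "client exit code: $?"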

TestNoKubernetes/serial/Start (5.31s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-190000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-190000 --no-kubernetes --driver=qemu2 : exit status 80 (5.246504541s)

-- stdout --
	* [NoKubernetes-190000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-190000
	* Restarting existing qemu2 VM for "NoKubernetes-190000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-190000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-190000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-190000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-190000 -n NoKubernetes-190000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-190000 -n NoKubernetes-190000: exit status 7 (64.2365ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-190000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.31s)

TestNoKubernetes/serial/StartNoArgs (5.29s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-190000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-190000 --driver=qemu2 : exit status 80 (5.26038525s)

-- stdout --
	* [NoKubernetes-190000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-190000
	* Restarting existing qemu2 VM for "NoKubernetes-190000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-190000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-190000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-190000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-190000 -n NoKubernetes-190000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-190000 -n NoKubernetes-190000: exit status 7 (29.120792ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-190000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.29s)
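
Note: the post-mortem that follows each failed start is the same in every block: helpers_test.go probes the host with "status --format={{.Host}}", and exit status 7 combined with the "Stopped" output confirms the VM was never provisioned, so the suite skips log retrieval instead of failing the post-mortem itself. The check can be reproduced by hand with the binary and profile from this run:

  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-190000 -n NoKubernetes-190000
  echo "status exit code: $?"   # 7 in this run, matching the "Stopped" state above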

TestNetworkPlugins/group/auto/Start (9.9s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-237000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-237000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.8944545s)

-- stdout --
	* [auto-237000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-237000" primary control-plane node in "auto-237000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-237000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:26:03.464916    5164 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:26:03.465050    5164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:26:03.465054    5164 out.go:358] Setting ErrFile to fd 2...
	I0912 15:26:03.465056    5164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:26:03.465174    5164 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:26:03.466191    5164 out.go:352] Setting JSON to false
	I0912 15:26:03.482757    5164 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5127,"bootTime":1726174836,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:26:03.482857    5164 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:26:03.488773    5164 out.go:177] * [auto-237000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:26:03.496561    5164 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:26:03.496597    5164 notify.go:220] Checking for updates...
	I0912 15:26:03.500532    5164 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:26:03.503573    5164 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:26:03.505172    5164 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:26:03.508516    5164 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:26:03.511526    5164 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:26:03.519641    5164 config.go:182] Loaded profile config "multinode-323000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:26:03.519703    5164 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0912 15:26:03.519748    5164 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:26:03.524587    5164 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:26:03.531369    5164 start.go:297] selected driver: qemu2
	I0912 15:26:03.531377    5164 start.go:901] validating driver "qemu2" against <nil>
	I0912 15:26:03.531382    5164 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:26:03.533697    5164 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 15:26:03.536587    5164 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:26:03.539631    5164 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:26:03.539659    5164 cni.go:84] Creating CNI manager for ""
	I0912 15:26:03.539669    5164 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:26:03.539678    5164 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 15:26:03.539716    5164 start.go:340] cluster config:
	{Name:auto-237000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-237000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:26:03.543253    5164 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:26:03.550502    5164 out.go:177] * Starting "auto-237000" primary control-plane node in "auto-237000" cluster
	I0912 15:26:03.554565    5164 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 15:26:03.554577    5164 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0912 15:26:03.554584    5164 cache.go:56] Caching tarball of preloaded images
	I0912 15:26:03.554632    5164 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:26:03.554636    5164 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0912 15:26:03.554685    5164 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/auto-237000/config.json ...
	I0912 15:26:03.554694    5164 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/auto-237000/config.json: {Name:mk89149a432837812d489aba9b5630b4b129934e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:26:03.555104    5164 start.go:360] acquireMachinesLock for auto-237000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:26:03.555133    5164 start.go:364] duration metric: took 24.917µs to acquireMachinesLock for "auto-237000"
	I0912 15:26:03.555144    5164 start.go:93] Provisioning new machine with config: &{Name:auto-237000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-237000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:26:03.555175    5164 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:26:03.563552    5164 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:26:03.578689    5164 start.go:159] libmachine.API.Create for "auto-237000" (driver="qemu2")
	I0912 15:26:03.578716    5164 client.go:168] LocalClient.Create starting
	I0912 15:26:03.578786    5164 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:26:03.578820    5164 main.go:141] libmachine: Decoding PEM data...
	I0912 15:26:03.578831    5164 main.go:141] libmachine: Parsing certificate...
	I0912 15:26:03.578880    5164 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:26:03.578910    5164 main.go:141] libmachine: Decoding PEM data...
	I0912 15:26:03.578916    5164 main.go:141] libmachine: Parsing certificate...
	I0912 15:26:03.579339    5164 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:26:03.757328    5164 main.go:141] libmachine: Creating SSH key...
	I0912 15:26:03.838793    5164 main.go:141] libmachine: Creating Disk image...
	I0912 15:26:03.838798    5164 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:26:03.839019    5164 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/auto-237000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/auto-237000/disk.qcow2
	I0912 15:26:03.848567    5164 main.go:141] libmachine: STDOUT: 
	I0912 15:26:03.848585    5164 main.go:141] libmachine: STDERR: 
	I0912 15:26:03.848645    5164 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/auto-237000/disk.qcow2 +20000M
	I0912 15:26:03.856614    5164 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:26:03.856636    5164 main.go:141] libmachine: STDERR: 
	I0912 15:26:03.856650    5164 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/auto-237000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/auto-237000/disk.qcow2
	I0912 15:26:03.856657    5164 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:26:03.856670    5164 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:26:03.856704    5164 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/auto-237000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/auto-237000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/auto-237000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:0d:97:20:60:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/auto-237000/disk.qcow2
	I0912 15:26:03.858326    5164 main.go:141] libmachine: STDOUT: 
	I0912 15:26:03.858358    5164 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:26:03.858378    5164 client.go:171] duration metric: took 279.664209ms to LocalClient.Create
	I0912 15:26:05.860622    5164 start.go:128] duration metric: took 2.305461833s to createHost
	I0912 15:26:05.860714    5164 start.go:83] releasing machines lock for "auto-237000", held for 2.305623625s
	W0912 15:26:05.860760    5164 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:26:05.873025    5164 out.go:177] * Deleting "auto-237000" in qemu2 ...
	W0912 15:26:05.907046    5164 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:26:05.907072    5164 start.go:729] Will try again in 5 seconds ...
	I0912 15:26:10.909212    5164 start.go:360] acquireMachinesLock for auto-237000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:26:10.909833    5164 start.go:364] duration metric: took 490.125µs to acquireMachinesLock for "auto-237000"
	I0912 15:26:10.909978    5164 start.go:93] Provisioning new machine with config: &{Name:auto-237000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-237000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:26:10.910225    5164 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:26:10.919799    5164 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:26:10.969416    5164 start.go:159] libmachine.API.Create for "auto-237000" (driver="qemu2")
	I0912 15:26:10.969463    5164 client.go:168] LocalClient.Create starting
	I0912 15:26:10.969579    5164 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:26:10.969641    5164 main.go:141] libmachine: Decoding PEM data...
	I0912 15:26:10.969657    5164 main.go:141] libmachine: Parsing certificate...
	I0912 15:26:10.969725    5164 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:26:10.969771    5164 main.go:141] libmachine: Decoding PEM data...
	I0912 15:26:10.969786    5164 main.go:141] libmachine: Parsing certificate...
	I0912 15:26:10.970281    5164 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:26:11.140603    5164 main.go:141] libmachine: Creating SSH key...
	I0912 15:26:11.259545    5164 main.go:141] libmachine: Creating Disk image...
	I0912 15:26:11.259551    5164 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:26:11.259763    5164 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/auto-237000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/auto-237000/disk.qcow2
	I0912 15:26:11.269318    5164 main.go:141] libmachine: STDOUT: 
	I0912 15:26:11.269338    5164 main.go:141] libmachine: STDERR: 
	I0912 15:26:11.269414    5164 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/auto-237000/disk.qcow2 +20000M
	I0912 15:26:11.277289    5164 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:26:11.277314    5164 main.go:141] libmachine: STDERR: 
	I0912 15:26:11.277327    5164 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/auto-237000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/auto-237000/disk.qcow2
	I0912 15:26:11.277332    5164 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:26:11.277344    5164 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:26:11.277379    5164 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/auto-237000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/auto-237000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/auto-237000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:b8:c1:a9:9f:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/auto-237000/disk.qcow2
	I0912 15:26:11.278991    5164 main.go:141] libmachine: STDOUT: 
	I0912 15:26:11.279012    5164 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:26:11.279026    5164 client.go:171] duration metric: took 309.564875ms to LocalClient.Create
	I0912 15:26:13.281203    5164 start.go:128] duration metric: took 2.370997208s to createHost
	I0912 15:26:13.281339    5164 start.go:83] releasing machines lock for "auto-237000", held for 2.371520709s
	W0912 15:26:13.281804    5164 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-237000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-237000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:26:13.291436    5164 out.go:201] 
	W0912 15:26:13.304705    5164 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:26:13.304816    5164 out.go:270] * 
	* 
	W0912 15:26:13.307840    5164 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:26:13.317464    5164 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.90s)
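
Note: this first --alsologtostderr trace exposes the full launch chain: libmachine prepares the boot disk with qemu-img convert/resize, then execs qemu-system-aarch64 through socket_vmnet_client, which is expected to connect to /var/run/socket_vmnet and hand that connection to QEMU as file descriptor 3 (-netdev socket,id=net0,fd=3). The connect() is refused, so QEMU never starts. A reduced form of the invocation from the trace, with host-specific paths elided (a sketch for illustration, not a supported way to launch minikube's VM):

  /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
    qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 3072 -smp 2 \
    -boot d -cdrom boot2docker.iso \
    -device virtio-net-pci,netdev=net0 -netdev socket,id=net0,fd=3 \
    -daemonize disk.qcow2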

TestNetworkPlugins/group/kindnet/Start (9.84s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-237000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-237000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.83701275s)

-- stdout --
	* [kindnet-237000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-237000" primary control-plane node in "kindnet-237000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-237000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:26:15.510444    5278 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:26:15.510578    5278 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:26:15.510581    5278 out.go:358] Setting ErrFile to fd 2...
	I0912 15:26:15.510583    5278 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:26:15.510700    5278 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:26:15.511876    5278 out.go:352] Setting JSON to false
	I0912 15:26:15.528853    5278 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5139,"bootTime":1726174836,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:26:15.528932    5278 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:26:15.535603    5278 out.go:177] * [kindnet-237000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:26:15.544040    5278 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:26:15.544120    5278 notify.go:220] Checking for updates...
	I0912 15:26:15.549476    5278 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:26:15.552442    5278 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:26:15.555480    5278 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:26:15.558433    5278 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:26:15.561463    5278 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:26:15.564744    5278 config.go:182] Loaded profile config "multinode-323000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:26:15.564812    5278 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0912 15:26:15.564852    5278 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:26:15.569369    5278 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:26:15.576427    5278 start.go:297] selected driver: qemu2
	I0912 15:26:15.576432    5278 start.go:901] validating driver "qemu2" against <nil>
	I0912 15:26:15.576438    5278 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:26:15.578638    5278 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 15:26:15.581380    5278 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:26:15.584474    5278 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:26:15.584489    5278 cni.go:84] Creating CNI manager for "kindnet"
	I0912 15:26:15.584492    5278 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0912 15:26:15.584521    5278 start.go:340] cluster config:
	{Name:kindnet-237000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-237000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:26:15.588006    5278 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:26:15.593404    5278 out.go:177] * Starting "kindnet-237000" primary control-plane node in "kindnet-237000" cluster
	I0912 15:26:15.597443    5278 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 15:26:15.597459    5278 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0912 15:26:15.597469    5278 cache.go:56] Caching tarball of preloaded images
	I0912 15:26:15.597538    5278 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:26:15.597545    5278 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0912 15:26:15.597609    5278 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/kindnet-237000/config.json ...
	I0912 15:26:15.597621    5278 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/kindnet-237000/config.json: {Name:mk2a0de55623b448099dfd2a4b711b69c894aef5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:26:15.597828    5278 start.go:360] acquireMachinesLock for kindnet-237000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:26:15.597857    5278 start.go:364] duration metric: took 23.916µs to acquireMachinesLock for "kindnet-237000"
	I0912 15:26:15.597868    5278 start.go:93] Provisioning new machine with config: &{Name:kindnet-237000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-237000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:26:15.597908    5278 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:26:15.606489    5278 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:26:15.621368    5278 start.go:159] libmachine.API.Create for "kindnet-237000" (driver="qemu2")
	I0912 15:26:15.621401    5278 client.go:168] LocalClient.Create starting
	I0912 15:26:15.621467    5278 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:26:15.621499    5278 main.go:141] libmachine: Decoding PEM data...
	I0912 15:26:15.621513    5278 main.go:141] libmachine: Parsing certificate...
	I0912 15:26:15.621556    5278 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:26:15.621581    5278 main.go:141] libmachine: Decoding PEM data...
	I0912 15:26:15.621587    5278 main.go:141] libmachine: Parsing certificate...
	I0912 15:26:15.621920    5278 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:26:15.782515    5278 main.go:141] libmachine: Creating SSH key...
	I0912 15:26:15.890825    5278 main.go:141] libmachine: Creating Disk image...
	I0912 15:26:15.890831    5278 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:26:15.891037    5278 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kindnet-237000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kindnet-237000/disk.qcow2
	I0912 15:26:15.900377    5278 main.go:141] libmachine: STDOUT: 
	I0912 15:26:15.900393    5278 main.go:141] libmachine: STDERR: 
	I0912 15:26:15.900441    5278 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kindnet-237000/disk.qcow2 +20000M
	I0912 15:26:15.908458    5278 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:26:15.908477    5278 main.go:141] libmachine: STDERR: 
	I0912 15:26:15.908489    5278 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kindnet-237000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kindnet-237000/disk.qcow2
	I0912 15:26:15.908494    5278 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:26:15.908511    5278 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:26:15.908537    5278 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kindnet-237000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kindnet-237000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kindnet-237000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:32:a0:ee:bb:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kindnet-237000/disk.qcow2
	I0912 15:26:15.910243    5278 main.go:141] libmachine: STDOUT: 
	I0912 15:26:15.910265    5278 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:26:15.910283    5278 client.go:171] duration metric: took 288.8835ms to LocalClient.Create
	I0912 15:26:17.912464    5278 start.go:128] duration metric: took 2.314573625s to createHost
	I0912 15:26:17.912578    5278 start.go:83] releasing machines lock for "kindnet-237000", held for 2.314763583s
	W0912 15:26:17.912639    5278 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:26:17.919068    5278 out.go:177] * Deleting "kindnet-237000" in qemu2 ...
	W0912 15:26:17.965234    5278 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:26:17.965260    5278 start.go:729] Will try again in 5 seconds ...
	I0912 15:26:22.967315    5278 start.go:360] acquireMachinesLock for kindnet-237000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:26:22.967890    5278 start.go:364] duration metric: took 496.5µs to acquireMachinesLock for "kindnet-237000"
	I0912 15:26:22.968029    5278 start.go:93] Provisioning new machine with config: &{Name:kindnet-237000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-237000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:26:22.968350    5278 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:26:22.976002    5278 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:26:23.023567    5278 start.go:159] libmachine.API.Create for "kindnet-237000" (driver="qemu2")
	I0912 15:26:23.023640    5278 client.go:168] LocalClient.Create starting
	I0912 15:26:23.023759    5278 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:26:23.023829    5278 main.go:141] libmachine: Decoding PEM data...
	I0912 15:26:23.023848    5278 main.go:141] libmachine: Parsing certificate...
	I0912 15:26:23.023911    5278 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:26:23.023961    5278 main.go:141] libmachine: Decoding PEM data...
	I0912 15:26:23.023973    5278 main.go:141] libmachine: Parsing certificate...
	I0912 15:26:23.024518    5278 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:26:23.193671    5278 main.go:141] libmachine: Creating SSH key...
	I0912 15:26:23.252664    5278 main.go:141] libmachine: Creating Disk image...
	I0912 15:26:23.252673    5278 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:26:23.252859    5278 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kindnet-237000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kindnet-237000/disk.qcow2
	I0912 15:26:23.262051    5278 main.go:141] libmachine: STDOUT: 
	I0912 15:26:23.262068    5278 main.go:141] libmachine: STDERR: 
	I0912 15:26:23.262128    5278 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kindnet-237000/disk.qcow2 +20000M
	I0912 15:26:23.269951    5278 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:26:23.269968    5278 main.go:141] libmachine: STDERR: 
	I0912 15:26:23.269984    5278 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kindnet-237000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kindnet-237000/disk.qcow2
	I0912 15:26:23.269989    5278 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:26:23.269997    5278 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:26:23.270021    5278 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kindnet-237000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kindnet-237000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kindnet-237000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:20:48:ac:e0:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kindnet-237000/disk.qcow2
	I0912 15:26:23.271764    5278 main.go:141] libmachine: STDOUT: 
	I0912 15:26:23.271781    5278 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:26:23.271793    5278 client.go:171] duration metric: took 248.148375ms to LocalClient.Create
	I0912 15:26:25.273955    5278 start.go:128] duration metric: took 2.305589084s to createHost
	I0912 15:26:25.274047    5278 start.go:83] releasing machines lock for "kindnet-237000", held for 2.306186708s
	W0912 15:26:25.274389    5278 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-237000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-237000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:26:25.282935    5278 out.go:201] 
	W0912 15:26:25.292911    5278 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:26:25.292942    5278 out.go:270] * 
	* 
	W0912 15:26:25.294171    5278 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:26:25.307930    5278 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.84s)
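
Note on the failure mode: the error is independent of the kindnet CNI. Every VM launch goes through /opt/socket_vmnet/bin/socket_vmnet_client, and both create attempts above fail with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', meaning nothing is listening on the socket_vmnet socket on this agent. A minimal triage sketch for the runner follows; the paths come from the log, while the manual daemon invocation and its --vmnet-gateway flag follow the socket_vmnet README and may differ for other installs:

    # Is the socket present, and is a socket_vmnet daemon running?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet

    # If the daemon is down, start it by hand (gateway address is an
    # assumed example; pick one that matches your vmnet setup):
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &

    # Sanity check: the client should now connect and run the command
    # with the vmnet fd attached, instead of printing "Connection refused":
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true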

TestNetworkPlugins/group/calico/Start (9.76s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-237000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-237000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.758509334s)

-- stdout --
	* [calico-237000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-237000" primary control-plane node in "calico-237000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-237000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:26:27.540063    5391 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:26:27.540192    5391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:26:27.540195    5391 out.go:358] Setting ErrFile to fd 2...
	I0912 15:26:27.540198    5391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:26:27.540322    5391 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:26:27.541451    5391 out.go:352] Setting JSON to false
	I0912 15:26:27.557830    5391 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5151,"bootTime":1726174836,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:26:27.557903    5391 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:26:27.564291    5391 out.go:177] * [calico-237000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:26:27.572121    5391 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:26:27.572184    5391 notify.go:220] Checking for updates...
	I0912 15:26:27.579090    5391 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:26:27.582081    5391 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:26:27.583540    5391 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:26:27.587053    5391 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:26:27.590100    5391 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:26:27.593402    5391 config.go:182] Loaded profile config "multinode-323000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:26:27.593463    5391 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0912 15:26:27.593511    5391 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:26:27.598054    5391 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:26:27.605067    5391 start.go:297] selected driver: qemu2
	I0912 15:26:27.605073    5391 start.go:901] validating driver "qemu2" against <nil>
	I0912 15:26:27.605079    5391 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:26:27.607347    5391 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 15:26:27.610037    5391 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:26:27.613165    5391 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:26:27.613187    5391 cni.go:84] Creating CNI manager for "calico"
	I0912 15:26:27.613194    5391 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0912 15:26:27.613223    5391 start.go:340] cluster config:
	{Name:calico-237000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-237000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:26:27.616642    5391 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:26:27.624149    5391 out.go:177] * Starting "calico-237000" primary control-plane node in "calico-237000" cluster
	I0912 15:26:27.628130    5391 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 15:26:27.628141    5391 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0912 15:26:27.628146    5391 cache.go:56] Caching tarball of preloaded images
	I0912 15:26:27.628191    5391 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:26:27.628196    5391 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0912 15:26:27.628250    5391 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/calico-237000/config.json ...
	I0912 15:26:27.628260    5391 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/calico-237000/config.json: {Name:mkb2519734245d7318c71f687a6eebe69a445d35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:26:27.628454    5391 start.go:360] acquireMachinesLock for calico-237000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:26:27.628484    5391 start.go:364] duration metric: took 24.375µs to acquireMachinesLock for "calico-237000"
	I0912 15:26:27.628495    5391 start.go:93] Provisioning new machine with config: &{Name:calico-237000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-237000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:26:27.628522    5391 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:26:27.637096    5391 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:26:27.652997    5391 start.go:159] libmachine.API.Create for "calico-237000" (driver="qemu2")
	I0912 15:26:27.653031    5391 client.go:168] LocalClient.Create starting
	I0912 15:26:27.653102    5391 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:26:27.653131    5391 main.go:141] libmachine: Decoding PEM data...
	I0912 15:26:27.653141    5391 main.go:141] libmachine: Parsing certificate...
	I0912 15:26:27.653176    5391 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:26:27.653198    5391 main.go:141] libmachine: Decoding PEM data...
	I0912 15:26:27.653206    5391 main.go:141] libmachine: Parsing certificate...
	I0912 15:26:27.653541    5391 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:26:27.819555    5391 main.go:141] libmachine: Creating SSH key...
	I0912 15:26:27.859559    5391 main.go:141] libmachine: Creating Disk image...
	I0912 15:26:27.859564    5391 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:26:27.859777    5391 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/calico-237000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/calico-237000/disk.qcow2
	I0912 15:26:27.869011    5391 main.go:141] libmachine: STDOUT: 
	I0912 15:26:27.869036    5391 main.go:141] libmachine: STDERR: 
	I0912 15:26:27.869095    5391 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/calico-237000/disk.qcow2 +20000M
	I0912 15:26:27.877079    5391 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:26:27.877099    5391 main.go:141] libmachine: STDERR: 
	I0912 15:26:27.877110    5391 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/calico-237000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/calico-237000/disk.qcow2
	I0912 15:26:27.877115    5391 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:26:27.877125    5391 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:26:27.877163    5391 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/calico-237000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/calico-237000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/calico-237000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:c8:01:82:d5:d4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/calico-237000/disk.qcow2
	I0912 15:26:27.879020    5391 main.go:141] libmachine: STDOUT: 
	I0912 15:26:27.879034    5391 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:26:27.879053    5391 client.go:171] duration metric: took 226.021542ms to LocalClient.Create
	I0912 15:26:29.881225    5391 start.go:128] duration metric: took 2.252724042s to createHost
	I0912 15:26:29.881323    5391 start.go:83] releasing machines lock for "calico-237000", held for 2.252880625s
	W0912 15:26:29.881378    5391 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:26:29.892161    5391 out.go:177] * Deleting "calico-237000" in qemu2 ...
	W0912 15:26:29.926193    5391 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:26:29.926227    5391 start.go:729] Will try again in 5 seconds ...
	I0912 15:26:34.928401    5391 start.go:360] acquireMachinesLock for calico-237000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:26:34.928963    5391 start.go:364] duration metric: took 463.25µs to acquireMachinesLock for "calico-237000"
	I0912 15:26:34.929135    5391 start.go:93] Provisioning new machine with config: &{Name:calico-237000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-237000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:26:34.929518    5391 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:26:34.935217    5391 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:26:34.986558    5391 start.go:159] libmachine.API.Create for "calico-237000" (driver="qemu2")
	I0912 15:26:34.986619    5391 client.go:168] LocalClient.Create starting
	I0912 15:26:34.986745    5391 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:26:34.986813    5391 main.go:141] libmachine: Decoding PEM data...
	I0912 15:26:34.986829    5391 main.go:141] libmachine: Parsing certificate...
	I0912 15:26:34.986891    5391 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:26:34.986935    5391 main.go:141] libmachine: Decoding PEM data...
	I0912 15:26:34.986964    5391 main.go:141] libmachine: Parsing certificate...
	I0912 15:26:34.987475    5391 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:26:35.157267    5391 main.go:141] libmachine: Creating SSH key...
	I0912 15:26:35.204831    5391 main.go:141] libmachine: Creating Disk image...
	I0912 15:26:35.204836    5391 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:26:35.205040    5391 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/calico-237000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/calico-237000/disk.qcow2
	I0912 15:26:35.214548    5391 main.go:141] libmachine: STDOUT: 
	I0912 15:26:35.214574    5391 main.go:141] libmachine: STDERR: 
	I0912 15:26:35.214628    5391 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/calico-237000/disk.qcow2 +20000M
	I0912 15:26:35.222837    5391 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:26:35.222855    5391 main.go:141] libmachine: STDERR: 
	I0912 15:26:35.222865    5391 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/calico-237000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/calico-237000/disk.qcow2
	I0912 15:26:35.222870    5391 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:26:35.222879    5391 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:26:35.222907    5391 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/calico-237000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/calico-237000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/calico-237000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:d1:34:d3:7f:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/calico-237000/disk.qcow2
	I0912 15:26:35.224614    5391 main.go:141] libmachine: STDOUT: 
	I0912 15:26:35.224632    5391 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:26:35.224655    5391 client.go:171] duration metric: took 238.032959ms to LocalClient.Create
	I0912 15:26:37.226815    5391 start.go:128] duration metric: took 2.297296208s to createHost
	I0912 15:26:37.226927    5391 start.go:83] releasing machines lock for "calico-237000", held for 2.297975875s
	W0912 15:26:37.227273    5391 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-237000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-237000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:26:37.236827    5391 out.go:201] 
	W0912 15:26:37.246033    5391 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:26:37.246064    5391 out.go:270] * 
	* 
	W0912 15:26:37.248524    5391 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:26:37.260820    5391 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.76s)
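
Note: calico fails identically to kindnet, including the 5-second retry, so the root cause is the unreachable socket_vmnet daemon rather than anything plugin-specific; the remaining TestNetworkPlugins/*/Start failures in this run follow the same pattern. Once the daemon is listening again, a single profile can be re-tried outside the harness with the exact start command the test used (copied verbatim from the log above), then cleaned up as minikube itself suggests:

    # Re-run one plugin start by hand (command taken from net_test.go:112 above):
    out/minikube-darwin-arm64 start -p calico-237000 --memory=3072 --alsologtostderr \
      --wait=true --wait-timeout=15m --cni=calico --driver=qemu2

    # Remove the half-created profile afterwards:
    out/minikube-darwin-arm64 delete -p calico-237000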

TestNetworkPlugins/group/custom-flannel/Start (9.92s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-237000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-237000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.920122125s)

-- stdout --
	* [custom-flannel-237000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-237000" primary control-plane node in "custom-flannel-237000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-237000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:26:39.692490    5509 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:26:39.692605    5509 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:26:39.692609    5509 out.go:358] Setting ErrFile to fd 2...
	I0912 15:26:39.692611    5509 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:26:39.692725    5509 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:26:39.693734    5509 out.go:352] Setting JSON to false
	I0912 15:26:39.710038    5509 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5163,"bootTime":1726174836,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:26:39.710109    5509 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:26:39.715646    5509 out.go:177] * [custom-flannel-237000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:26:39.722871    5509 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:26:39.722940    5509 notify.go:220] Checking for updates...
	I0912 15:26:39.730820    5509 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:26:39.733852    5509 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:26:39.736842    5509 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:26:39.739823    5509 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:26:39.742847    5509 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:26:39.746255    5509 config.go:182] Loaded profile config "multinode-323000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:26:39.746316    5509 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0912 15:26:39.746370    5509 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:26:39.750822    5509 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:26:39.756725    5509 start.go:297] selected driver: qemu2
	I0912 15:26:39.756731    5509 start.go:901] validating driver "qemu2" against <nil>
	I0912 15:26:39.756737    5509 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:26:39.758823    5509 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 15:26:39.761916    5509 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:26:39.764900    5509 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:26:39.764924    5509 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0912 15:26:39.764932    5509 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0912 15:26:39.764969    5509 start.go:340] cluster config:
	{Name:custom-flannel-237000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-237000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:26:39.768273    5509 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:26:39.775796    5509 out.go:177] * Starting "custom-flannel-237000" primary control-plane node in "custom-flannel-237000" cluster
	I0912 15:26:39.779780    5509 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 15:26:39.779793    5509 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0912 15:26:39.779801    5509 cache.go:56] Caching tarball of preloaded images
	I0912 15:26:39.779852    5509 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:26:39.779857    5509 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0912 15:26:39.779917    5509 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/custom-flannel-237000/config.json ...
	I0912 15:26:39.779927    5509 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/custom-flannel-237000/config.json: {Name:mk0ebdef9716150d15772b1e6529c48c4daeff4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:26:39.780147    5509 start.go:360] acquireMachinesLock for custom-flannel-237000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:26:39.780186    5509 start.go:364] duration metric: took 29.75µs to acquireMachinesLock for "custom-flannel-237000"
	I0912 15:26:39.780199    5509 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-237000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-237000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:26:39.780223    5509 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:26:39.788883    5509 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:26:39.804540    5509 start.go:159] libmachine.API.Create for "custom-flannel-237000" (driver="qemu2")
	I0912 15:26:39.804565    5509 client.go:168] LocalClient.Create starting
	I0912 15:26:39.804648    5509 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:26:39.804678    5509 main.go:141] libmachine: Decoding PEM data...
	I0912 15:26:39.804690    5509 main.go:141] libmachine: Parsing certificate...
	I0912 15:26:39.804729    5509 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:26:39.804755    5509 main.go:141] libmachine: Decoding PEM data...
	I0912 15:26:39.804763    5509 main.go:141] libmachine: Parsing certificate...
	I0912 15:26:39.805073    5509 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:26:39.975982    5509 main.go:141] libmachine: Creating SSH key...
	I0912 15:26:40.146663    5509 main.go:141] libmachine: Creating Disk image...
	I0912 15:26:40.146670    5509 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:26:40.147050    5509 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/custom-flannel-237000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/custom-flannel-237000/disk.qcow2
	I0912 15:26:40.156668    5509 main.go:141] libmachine: STDOUT: 
	I0912 15:26:40.156691    5509 main.go:141] libmachine: STDERR: 
	I0912 15:26:40.156737    5509 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/custom-flannel-237000/disk.qcow2 +20000M
	I0912 15:26:40.164805    5509 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:26:40.164822    5509 main.go:141] libmachine: STDERR: 
	I0912 15:26:40.164832    5509 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/custom-flannel-237000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/custom-flannel-237000/disk.qcow2
	I0912 15:26:40.164835    5509 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:26:40.164851    5509 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:26:40.164897    5509 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/custom-flannel-237000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/custom-flannel-237000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/custom-flannel-237000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:80:24:91:1a:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/custom-flannel-237000/disk.qcow2
	I0912 15:26:40.166678    5509 main.go:141] libmachine: STDOUT: 
	I0912 15:26:40.166694    5509 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:26:40.166722    5509 client.go:171] duration metric: took 362.152417ms to LocalClient.Create
	I0912 15:26:42.168952    5509 start.go:128] duration metric: took 2.388747292s to createHost
	I0912 15:26:42.169038    5509 start.go:83] releasing machines lock for "custom-flannel-237000", held for 2.388894916s
	W0912 15:26:42.169108    5509 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:26:42.176584    5509 out.go:177] * Deleting "custom-flannel-237000" in qemu2 ...
	W0912 15:26:42.214725    5509 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:26:42.214759    5509 start.go:729] Will try again in 5 seconds ...
	I0912 15:26:47.216992    5509 start.go:360] acquireMachinesLock for custom-flannel-237000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:26:47.217635    5509 start.go:364] duration metric: took 504.458µs to acquireMachinesLock for "custom-flannel-237000"
	I0912 15:26:47.217717    5509 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-237000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-237000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:26:47.218009    5509 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:26:47.223746    5509 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:26:47.276588    5509 start.go:159] libmachine.API.Create for "custom-flannel-237000" (driver="qemu2")
	I0912 15:26:47.276644    5509 client.go:168] LocalClient.Create starting
	I0912 15:26:47.276774    5509 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:26:47.276864    5509 main.go:141] libmachine: Decoding PEM data...
	I0912 15:26:47.276882    5509 main.go:141] libmachine: Parsing certificate...
	I0912 15:26:47.276949    5509 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:26:47.276999    5509 main.go:141] libmachine: Decoding PEM data...
	I0912 15:26:47.277012    5509 main.go:141] libmachine: Parsing certificate...
	I0912 15:26:47.277594    5509 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:26:47.447318    5509 main.go:141] libmachine: Creating SSH key...
	I0912 15:26:47.524160    5509 main.go:141] libmachine: Creating Disk image...
	I0912 15:26:47.524167    5509 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:26:47.524421    5509 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/custom-flannel-237000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/custom-flannel-237000/disk.qcow2
	I0912 15:26:47.534095    5509 main.go:141] libmachine: STDOUT: 
	I0912 15:26:47.534117    5509 main.go:141] libmachine: STDERR: 
	I0912 15:26:47.534164    5509 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/custom-flannel-237000/disk.qcow2 +20000M
	I0912 15:26:47.542372    5509 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:26:47.542397    5509 main.go:141] libmachine: STDERR: 
	I0912 15:26:47.542409    5509 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/custom-flannel-237000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/custom-flannel-237000/disk.qcow2
	I0912 15:26:47.542414    5509 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:26:47.542423    5509 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:26:47.542457    5509 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/custom-flannel-237000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/custom-flannel-237000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/custom-flannel-237000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:74:c5:05:24:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/custom-flannel-237000/disk.qcow2
	I0912 15:26:47.544156    5509 main.go:141] libmachine: STDOUT: 
	I0912 15:26:47.544173    5509 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:26:47.544185    5509 client.go:171] duration metric: took 267.542542ms to LocalClient.Create
	I0912 15:26:49.546305    5509 start.go:128] duration metric: took 2.328321542s to createHost
	I0912 15:26:49.546361    5509 start.go:83] releasing machines lock for "custom-flannel-237000", held for 2.328754917s
	W0912 15:26:49.546739    5509 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-237000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-237000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:26:49.555128    5509 out.go:201] 
	W0912 15:26:49.562333    5509 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:26:49.562356    5509 out.go:270] * 
	* 
	W0912 15:26:49.563788    5509 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:26:49.574285    5509 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.92s)
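
Every qemu2 start in this group fails at the same step: the socket_vmnet_client wrapper cannot reach the host-side socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the VM is never launched. A minimal host-side check, sketched using only the paths that appear in the log above and assuming a standard socket_vmnet install on the build agent:

	# Is the unix socket present, and is the daemon process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

If the socket file is missing or no daemon holds it, the daemon (normally started as root, e.g. via launchd) needs to be restarted before any of these qemu2 tests can pass.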

TestNetworkPlugins/group/false/Start (9.93s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-237000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
E0912 15:26:52.706721    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/functional-384000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-237000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.930704666s)

-- stdout --
	* [false-237000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-237000" primary control-plane node in "false-237000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-237000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:26:51.935291    5635 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:26:51.935413    5635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:26:51.935416    5635 out.go:358] Setting ErrFile to fd 2...
	I0912 15:26:51.935418    5635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:26:51.935539    5635 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:26:51.936711    5635 out.go:352] Setting JSON to false
	I0912 15:26:51.953128    5635 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5175,"bootTime":1726174836,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:26:51.953191    5635 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:26:51.957786    5635 out.go:177] * [false-237000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:26:51.965568    5635 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:26:51.965626    5635 notify.go:220] Checking for updates...
	I0912 15:26:51.973608    5635 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:26:51.976633    5635 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:26:51.979578    5635 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:26:51.982520    5635 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:26:51.985586    5635 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:26:51.988905    5635 config.go:182] Loaded profile config "multinode-323000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:26:51.988971    5635 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0912 15:26:51.989020    5635 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:26:51.993567    5635 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:26:52.000639    5635 start.go:297] selected driver: qemu2
	I0912 15:26:52.000645    5635 start.go:901] validating driver "qemu2" against <nil>
	I0912 15:26:52.000651    5635 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:26:52.002849    5635 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 15:26:52.005603    5635 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:26:52.008643    5635 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:26:52.008660    5635 cni.go:84] Creating CNI manager for "false"
	I0912 15:26:52.008685    5635 start.go:340] cluster config:
	{Name:false-237000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-237000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_
client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:26:52.012005    5635 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:26:52.019589    5635 out.go:177] * Starting "false-237000" primary control-plane node in "false-237000" cluster
	I0912 15:26:52.023649    5635 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 15:26:52.023668    5635 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0912 15:26:52.023679    5635 cache.go:56] Caching tarball of preloaded images
	I0912 15:26:52.023739    5635 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:26:52.023747    5635 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0912 15:26:52.023811    5635 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/false-237000/config.json ...
	I0912 15:26:52.023823    5635 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/false-237000/config.json: {Name:mk19ed6c18d8b6b2a0ca30da086a5bb5c9356bac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:26:52.024038    5635 start.go:360] acquireMachinesLock for false-237000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:26:52.024068    5635 start.go:364] duration metric: took 25.792µs to acquireMachinesLock for "false-237000"
	I0912 15:26:52.024080    5635 start.go:93] Provisioning new machine with config: &{Name:false-237000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:false-237000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:26:52.024114    5635 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:26:52.032593    5635 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:26:52.048336    5635 start.go:159] libmachine.API.Create for "false-237000" (driver="qemu2")
	I0912 15:26:52.048358    5635 client.go:168] LocalClient.Create starting
	I0912 15:26:52.048415    5635 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:26:52.048443    5635 main.go:141] libmachine: Decoding PEM data...
	I0912 15:26:52.048450    5635 main.go:141] libmachine: Parsing certificate...
	I0912 15:26:52.048490    5635 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:26:52.048515    5635 main.go:141] libmachine: Decoding PEM data...
	I0912 15:26:52.048523    5635 main.go:141] libmachine: Parsing certificate...
	I0912 15:26:52.048865    5635 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:26:52.211623    5635 main.go:141] libmachine: Creating SSH key...
	I0912 15:26:52.439415    5635 main.go:141] libmachine: Creating Disk image...
	I0912 15:26:52.439424    5635 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:26:52.439686    5635 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/false-237000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/false-237000/disk.qcow2
	I0912 15:26:52.449425    5635 main.go:141] libmachine: STDOUT: 
	I0912 15:26:52.449449    5635 main.go:141] libmachine: STDERR: 
	I0912 15:26:52.449514    5635 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/false-237000/disk.qcow2 +20000M
	I0912 15:26:52.457389    5635 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:26:52.457413    5635 main.go:141] libmachine: STDERR: 
	I0912 15:26:52.457425    5635 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/false-237000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/false-237000/disk.qcow2
	I0912 15:26:52.457429    5635 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:26:52.457438    5635 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:26:52.457469    5635 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/false-237000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/false-237000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/false-237000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:8c:0d:4b:1e:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/false-237000/disk.qcow2
	I0912 15:26:52.459114    5635 main.go:141] libmachine: STDOUT: 
	I0912 15:26:52.459130    5635 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:26:52.459159    5635 client.go:171] duration metric: took 410.805958ms to LocalClient.Create
	I0912 15:26:54.460730    5635 start.go:128] duration metric: took 2.436656417s to createHost
	I0912 15:26:54.460766    5635 start.go:83] releasing machines lock for "false-237000", held for 2.4367455s
	W0912 15:26:54.460816    5635 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:26:54.471019    5635 out.go:177] * Deleting "false-237000" in qemu2 ...
	W0912 15:26:54.496614    5635 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:26:54.496628    5635 start.go:729] Will try again in 5 seconds ...
	I0912 15:26:59.498739    5635 start.go:360] acquireMachinesLock for false-237000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:26:59.499189    5635 start.go:364] duration metric: took 349.75µs to acquireMachinesLock for "false-237000"
	I0912 15:26:59.499306    5635 start.go:93] Provisioning new machine with config: &{Name:false-237000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:false-237000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:26:59.499533    5635 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:26:59.508165    5635 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:26:59.550777    5635 start.go:159] libmachine.API.Create for "false-237000" (driver="qemu2")
	I0912 15:26:59.550820    5635 client.go:168] LocalClient.Create starting
	I0912 15:26:59.550921    5635 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:26:59.550984    5635 main.go:141] libmachine: Decoding PEM data...
	I0912 15:26:59.550997    5635 main.go:141] libmachine: Parsing certificate...
	I0912 15:26:59.551066    5635 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:26:59.551105    5635 main.go:141] libmachine: Decoding PEM data...
	I0912 15:26:59.551117    5635 main.go:141] libmachine: Parsing certificate...
	I0912 15:26:59.551623    5635 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:26:59.716850    5635 main.go:141] libmachine: Creating SSH key...
	I0912 15:26:59.776035    5635 main.go:141] libmachine: Creating Disk image...
	I0912 15:26:59.776045    5635 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:26:59.776309    5635 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/false-237000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/false-237000/disk.qcow2
	I0912 15:26:59.786708    5635 main.go:141] libmachine: STDOUT: 
	I0912 15:26:59.786731    5635 main.go:141] libmachine: STDERR: 
	I0912 15:26:59.786816    5635 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/false-237000/disk.qcow2 +20000M
	I0912 15:26:59.796042    5635 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:26:59.796064    5635 main.go:141] libmachine: STDERR: 
	I0912 15:26:59.796105    5635 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/false-237000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/false-237000/disk.qcow2
	I0912 15:26:59.796110    5635 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:26:59.796121    5635 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:26:59.796151    5635 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/false-237000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/false-237000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/false-237000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:02:e9:cc:5a:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/false-237000/disk.qcow2
	I0912 15:26:59.798003    5635 main.go:141] libmachine: STDOUT: 
	I0912 15:26:59.798022    5635 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:26:59.798046    5635 client.go:171] duration metric: took 247.227ms to LocalClient.Create
	I0912 15:27:01.800225    5635 start.go:128] duration metric: took 2.30070175s to createHost
	I0912 15:27:01.800301    5635 start.go:83] releasing machines lock for "false-237000", held for 2.301142083s
	W0912 15:27:01.800686    5635 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-237000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-237000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:27:01.809335    5635 out.go:201] 
	W0912 15:27:01.812557    5635 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:27:01.812613    5635 out.go:270] * 
	* 
	W0912 15:27:01.815578    5635 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:27:01.825436    5635 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.93s)
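
The stderr above also shows minikube's retry flow: StartHost fails, the profile is deleted, it waits 5 seconds and retries once, then exits with status 80, matching the GUEST_PROVISION reason printed in the log. The failing step can likely be reproduced without minikube at all, since socket_vmnet_client connects to the socket and execs the given command with the connection on fd 3 (as the fd=3 in the qemu command line above suggests); a sketch, where `true` stands in for the full qemu-system-aarch64 invocation from the log:

	# While the daemon is down this should print the same
	# 'Failed to connect to "/var/run/socket_vmnet": Connection refused';
	# once the daemon is running it should exit 0.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true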

TestNetworkPlugins/group/enable-default-cni/Start (9.8s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-237000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
E0912 15:27:06.924066    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-237000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.797281333s)

-- stdout --
	* [enable-default-cni-237000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-237000" primary control-plane node in "enable-default-cni-237000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-237000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:27:04.036111    5744 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:27:04.036236    5744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:27:04.036239    5744 out.go:358] Setting ErrFile to fd 2...
	I0912 15:27:04.036242    5744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:27:04.036399    5744 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:27:04.037497    5744 out.go:352] Setting JSON to false
	I0912 15:27:04.053979    5744 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5188,"bootTime":1726174836,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:27:04.054054    5744 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:27:04.061244    5744 out.go:177] * [enable-default-cni-237000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:27:04.069250    5744 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:27:04.069292    5744 notify.go:220] Checking for updates...
	I0912 15:27:04.075238    5744 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:27:04.078218    5744 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:27:04.081255    5744 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:27:04.084333    5744 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:27:04.087233    5744 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:27:04.090566    5744 config.go:182] Loaded profile config "multinode-323000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:27:04.090636    5744 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0912 15:27:04.090688    5744 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:27:04.095264    5744 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:27:04.102260    5744 start.go:297] selected driver: qemu2
	I0912 15:27:04.102266    5744 start.go:901] validating driver "qemu2" against <nil>
	I0912 15:27:04.102273    5744 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:27:04.104579    5744 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 15:27:04.107312    5744 out.go:177] * Automatically selected the socket_vmnet network
	E0912 15:27:04.110278    5744 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0912 15:27:04.110296    5744 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:27:04.110338    5744 cni.go:84] Creating CNI manager for "bridge"
	I0912 15:27:04.110344    5744 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 15:27:04.110395    5744 start.go:340] cluster config:
	{Name:enable-default-cni-237000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-237000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/
socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:27:04.113990    5744 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:27:04.121290    5744 out.go:177] * Starting "enable-default-cni-237000" primary control-plane node in "enable-default-cni-237000" cluster
	I0912 15:27:04.125257    5744 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 15:27:04.125275    5744 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0912 15:27:04.125283    5744 cache.go:56] Caching tarball of preloaded images
	I0912 15:27:04.125345    5744 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:27:04.125351    5744 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0912 15:27:04.125418    5744 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/enable-default-cni-237000/config.json ...
	I0912 15:27:04.125429    5744 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/enable-default-cni-237000/config.json: {Name:mk875bcabd1eb3113eadf599cf4ac134bf0fc1e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:27:04.125646    5744 start.go:360] acquireMachinesLock for enable-default-cni-237000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:27:04.125679    5744 start.go:364] duration metric: took 25.708µs to acquireMachinesLock for "enable-default-cni-237000"
	I0912 15:27:04.125692    5744 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-237000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-237000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:27:04.125720    5744 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:27:04.134234    5744 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:27:04.150982    5744 start.go:159] libmachine.API.Create for "enable-default-cni-237000" (driver="qemu2")
	I0912 15:27:04.151012    5744 client.go:168] LocalClient.Create starting
	I0912 15:27:04.151075    5744 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:27:04.151112    5744 main.go:141] libmachine: Decoding PEM data...
	I0912 15:27:04.151120    5744 main.go:141] libmachine: Parsing certificate...
	I0912 15:27:04.151155    5744 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:27:04.151180    5744 main.go:141] libmachine: Decoding PEM data...
	I0912 15:27:04.151185    5744 main.go:141] libmachine: Parsing certificate...
	I0912 15:27:04.151563    5744 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:27:04.312975    5744 main.go:141] libmachine: Creating SSH key...
	I0912 15:27:04.388565    5744 main.go:141] libmachine: Creating Disk image...
	I0912 15:27:04.388571    5744 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:27:04.388775    5744 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/enable-default-cni-237000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/enable-default-cni-237000/disk.qcow2
	I0912 15:27:04.398077    5744 main.go:141] libmachine: STDOUT: 
	I0912 15:27:04.398098    5744 main.go:141] libmachine: STDERR: 
	I0912 15:27:04.398161    5744 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/enable-default-cni-237000/disk.qcow2 +20000M
	I0912 15:27:04.406314    5744 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:27:04.406328    5744 main.go:141] libmachine: STDERR: 
	I0912 15:27:04.406340    5744 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/enable-default-cni-237000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/enable-default-cni-237000/disk.qcow2
	I0912 15:27:04.406347    5744 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:27:04.406360    5744 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:27:04.406388    5744 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/enable-default-cni-237000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/enable-default-cni-237000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/enable-default-cni-237000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:0a:b1:01:85:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/enable-default-cni-237000/disk.qcow2
	I0912 15:27:04.408020    5744 main.go:141] libmachine: STDOUT: 
	I0912 15:27:04.408037    5744 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:27:04.408055    5744 client.go:171] duration metric: took 257.044709ms to LocalClient.Create
	I0912 15:27:06.410211    5744 start.go:128] duration metric: took 2.284511334s to createHost
	I0912 15:27:06.410320    5744 start.go:83] releasing machines lock for "enable-default-cni-237000", held for 2.284680833s
	W0912 15:27:06.410386    5744 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:27:06.420745    5744 out.go:177] * Deleting "enable-default-cni-237000" in qemu2 ...
	W0912 15:27:06.462347    5744 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:27:06.462375    5744 start.go:729] Will try again in 5 seconds ...
	I0912 15:27:11.464574    5744 start.go:360] acquireMachinesLock for enable-default-cni-237000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:27:11.465156    5744 start.go:364] duration metric: took 480.625µs to acquireMachinesLock for "enable-default-cni-237000"
	I0912 15:27:11.465239    5744 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-237000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-237000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:27:11.465487    5744 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:27:11.474800    5744 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:27:11.525182    5744 start.go:159] libmachine.API.Create for "enable-default-cni-237000" (driver="qemu2")
	I0912 15:27:11.525247    5744 client.go:168] LocalClient.Create starting
	I0912 15:27:11.525377    5744 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:27:11.525450    5744 main.go:141] libmachine: Decoding PEM data...
	I0912 15:27:11.525466    5744 main.go:141] libmachine: Parsing certificate...
	I0912 15:27:11.525525    5744 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:27:11.525574    5744 main.go:141] libmachine: Decoding PEM data...
	I0912 15:27:11.525588    5744 main.go:141] libmachine: Parsing certificate...
	I0912 15:27:11.526144    5744 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:27:11.696227    5744 main.go:141] libmachine: Creating SSH key...
	I0912 15:27:11.737695    5744 main.go:141] libmachine: Creating Disk image...
	I0912 15:27:11.737700    5744 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:27:11.737912    5744 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/enable-default-cni-237000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/enable-default-cni-237000/disk.qcow2
	I0912 15:27:11.747063    5744 main.go:141] libmachine: STDOUT: 
	I0912 15:27:11.747084    5744 main.go:141] libmachine: STDERR: 
	I0912 15:27:11.747131    5744 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/enable-default-cni-237000/disk.qcow2 +20000M
	I0912 15:27:11.754977    5744 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:27:11.755003    5744 main.go:141] libmachine: STDERR: 
	I0912 15:27:11.755016    5744 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/enable-default-cni-237000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/enable-default-cni-237000/disk.qcow2
	I0912 15:27:11.755020    5744 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:27:11.755035    5744 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:27:11.755058    5744 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/enable-default-cni-237000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/enable-default-cni-237000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/enable-default-cni-237000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:77:bc:21:48:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/enable-default-cni-237000/disk.qcow2
	I0912 15:27:11.756721    5744 main.go:141] libmachine: STDOUT: 
	I0912 15:27:11.756738    5744 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:27:11.756751    5744 client.go:171] duration metric: took 231.500708ms to LocalClient.Create
	I0912 15:27:13.759012    5744 start.go:128] duration metric: took 2.293461625s to createHost
	I0912 15:27:13.759092    5744 start.go:83] releasing machines lock for "enable-default-cni-237000", held for 2.293960167s
	W0912 15:27:13.759432    5744 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-237000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-237000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:27:13.767956    5744 out.go:201] 
	W0912 15:27:13.778126    5744 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:27:13.778175    5744 out.go:270] * 
	* 
	W0912 15:27:13.780812    5744 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:27:13.790970    5744 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.80s)
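Note on this failure mode: socket_vmnet_client could not reach the Unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU was never handed its network fd and the start aborted with GUEST_PROVISION. A minimal sketch for confirming the daemon state on the build host, assuming the stock macOS (BSD) nc, which supports Unix sockets via -U; the socket path is taken from the qemu invocation above:

	# Does the socket file exist, and who owns it?
	ls -l /var/run/socket_vmnet
	# Probe the socket directly; "Connection refused" here reproduces what libmachine logged.
	nc -U /var/run/socket_vmnet < /dev/null

If the file is absent or the probe is refused, the socket_vmnet daemon is not running (or not running as root), which would explain every qemu2 start in this group failing identically.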

TestNetworkPlugins/group/flannel/Start (9.85s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-237000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-237000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.84553825s)

-- stdout --
	* [flannel-237000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-237000" primary control-plane node in "flannel-237000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-237000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:27:16.041015    5856 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:27:16.041139    5856 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:27:16.041141    5856 out.go:358] Setting ErrFile to fd 2...
	I0912 15:27:16.041143    5856 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:27:16.041285    5856 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:27:16.042459    5856 out.go:352] Setting JSON to false
	I0912 15:27:16.059260    5856 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5200,"bootTime":1726174836,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:27:16.059351    5856 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:27:16.065392    5856 out.go:177] * [flannel-237000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:27:16.073607    5856 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:27:16.073702    5856 notify.go:220] Checking for updates...
	I0912 15:27:16.079503    5856 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:27:16.082583    5856 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:27:16.085459    5856 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:27:16.088551    5856 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:27:16.091525    5856 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:27:16.093401    5856 config.go:182] Loaded profile config "multinode-323000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:27:16.093462    5856 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0912 15:27:16.093506    5856 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:27:16.097527    5856 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:27:16.104396    5856 start.go:297] selected driver: qemu2
	I0912 15:27:16.104402    5856 start.go:901] validating driver "qemu2" against <nil>
	I0912 15:27:16.104408    5856 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:27:16.106529    5856 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 15:27:16.109505    5856 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:27:16.112806    5856 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:27:16.112843    5856 cni.go:84] Creating CNI manager for "flannel"
	I0912 15:27:16.112848    5856 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0912 15:27:16.112873    5856 start.go:340] cluster config:
	{Name:flannel-237000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-237000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:27:16.116250    5856 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:27:16.123525    5856 out.go:177] * Starting "flannel-237000" primary control-plane node in "flannel-237000" cluster
	I0912 15:27:16.127628    5856 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 15:27:16.127641    5856 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0912 15:27:16.127654    5856 cache.go:56] Caching tarball of preloaded images
	I0912 15:27:16.127701    5856 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:27:16.127706    5856 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0912 15:27:16.127754    5856 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/flannel-237000/config.json ...
	I0912 15:27:16.127764    5856 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/flannel-237000/config.json: {Name:mk6e66bbc91d75a1dd4ab1a3fbc667e324d99a09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:27:16.127961    5856 start.go:360] acquireMachinesLock for flannel-237000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:27:16.127990    5856 start.go:364] duration metric: took 24.5µs to acquireMachinesLock for "flannel-237000"
	I0912 15:27:16.128002    5856 start.go:93] Provisioning new machine with config: &{Name:flannel-237000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-237000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:27:16.128031    5856 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:27:16.135516    5856 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:27:16.151484    5856 start.go:159] libmachine.API.Create for "flannel-237000" (driver="qemu2")
	I0912 15:27:16.151509    5856 client.go:168] LocalClient.Create starting
	I0912 15:27:16.151577    5856 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:27:16.151608    5856 main.go:141] libmachine: Decoding PEM data...
	I0912 15:27:16.151623    5856 main.go:141] libmachine: Parsing certificate...
	I0912 15:27:16.151662    5856 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:27:16.151686    5856 main.go:141] libmachine: Decoding PEM data...
	I0912 15:27:16.151697    5856 main.go:141] libmachine: Parsing certificate...
	I0912 15:27:16.152110    5856 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:27:16.313572    5856 main.go:141] libmachine: Creating SSH key...
	I0912 15:27:16.388293    5856 main.go:141] libmachine: Creating Disk image...
	I0912 15:27:16.388300    5856 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:27:16.388544    5856 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/flannel-237000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/flannel-237000/disk.qcow2
	I0912 15:27:16.398207    5856 main.go:141] libmachine: STDOUT: 
	I0912 15:27:16.398234    5856 main.go:141] libmachine: STDERR: 
	I0912 15:27:16.398289    5856 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/flannel-237000/disk.qcow2 +20000M
	I0912 15:27:16.406436    5856 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:27:16.406452    5856 main.go:141] libmachine: STDERR: 
	I0912 15:27:16.406466    5856 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/flannel-237000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/flannel-237000/disk.qcow2
	I0912 15:27:16.406469    5856 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:27:16.406481    5856 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:27:16.406510    5856 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/flannel-237000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/flannel-237000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/flannel-237000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:93:fe:8d:69:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/flannel-237000/disk.qcow2
	I0912 15:27:16.408132    5856 main.go:141] libmachine: STDOUT: 
	I0912 15:27:16.408153    5856 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:27:16.408173    5856 client.go:171] duration metric: took 256.665ms to LocalClient.Create
	I0912 15:27:18.410345    5856 start.go:128] duration metric: took 2.282334792s to createHost
	I0912 15:27:18.410543    5856 start.go:83] releasing machines lock for "flannel-237000", held for 2.282522375s
	W0912 15:27:18.410628    5856 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:27:18.421040    5856 out.go:177] * Deleting "flannel-237000" in qemu2 ...
	W0912 15:27:18.461882    5856 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:27:18.461916    5856 start.go:729] Will try again in 5 seconds ...
	I0912 15:27:23.464072    5856 start.go:360] acquireMachinesLock for flannel-237000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:27:23.464619    5856 start.go:364] duration metric: took 402.334µs to acquireMachinesLock for "flannel-237000"
	I0912 15:27:23.464756    5856 start.go:93] Provisioning new machine with config: &{Name:flannel-237000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-237000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:27:23.465227    5856 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:27:23.472943    5856 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:27:23.518803    5856 start.go:159] libmachine.API.Create for "flannel-237000" (driver="qemu2")
	I0912 15:27:23.518853    5856 client.go:168] LocalClient.Create starting
	I0912 15:27:23.518961    5856 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:27:23.519032    5856 main.go:141] libmachine: Decoding PEM data...
	I0912 15:27:23.519051    5856 main.go:141] libmachine: Parsing certificate...
	I0912 15:27:23.519119    5856 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:27:23.519160    5856 main.go:141] libmachine: Decoding PEM data...
	I0912 15:27:23.519177    5856 main.go:141] libmachine: Parsing certificate...
	I0912 15:27:23.519828    5856 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:27:23.706981    5856 main.go:141] libmachine: Creating SSH key...
	I0912 15:27:23.788307    5856 main.go:141] libmachine: Creating Disk image...
	I0912 15:27:23.788313    5856 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:27:23.788541    5856 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/flannel-237000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/flannel-237000/disk.qcow2
	I0912 15:27:23.797688    5856 main.go:141] libmachine: STDOUT: 
	I0912 15:27:23.797711    5856 main.go:141] libmachine: STDERR: 
	I0912 15:27:23.797752    5856 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/flannel-237000/disk.qcow2 +20000M
	I0912 15:27:23.805675    5856 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:27:23.805695    5856 main.go:141] libmachine: STDERR: 
	I0912 15:27:23.805710    5856 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/flannel-237000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/flannel-237000/disk.qcow2
	I0912 15:27:23.805713    5856 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:27:23.805724    5856 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:27:23.805749    5856 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/flannel-237000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/flannel-237000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/flannel-237000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:c5:ca:d2:ed:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/flannel-237000/disk.qcow2
	I0912 15:27:23.807396    5856 main.go:141] libmachine: STDOUT: 
	I0912 15:27:23.807418    5856 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:27:23.807431    5856 client.go:171] duration metric: took 288.5805ms to LocalClient.Create
	I0912 15:27:25.809612    5856 start.go:128] duration metric: took 2.344398416s to createHost
	I0912 15:27:25.809744    5856 start.go:83] releasing machines lock for "flannel-237000", held for 2.345150959s
	W0912 15:27:25.810108    5856 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-237000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-237000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:27:25.823679    5856 out.go:201] 
	W0912 15:27:25.827781    5856 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:27:25.827862    5856 out.go:270] * 
	* 
	W0912 15:27:25.830205    5856 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:27:25.843590    5856 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.85s)
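The wrapper fails before qemu-system-aarch64 ever runs, so the client can be exercised in isolation. A sketch using the exact binary and socket path from this log; the understanding here is that socket_vmnet_client connects to the socket and then execs the given command with the connection on fd 3, so a no-op command makes a clean reachability check (treat that fd-3 detail as an assumption if the installed version differs):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true \
	  && echo "socket_vmnet reachable" \
	  || echo "still refused"

A zero exit would mean the daemon recovered and these failures were transient; the identical "Connection refused" across consecutive tests from 15:27:11 through 15:27:40 points instead to the daemon being down for the whole window.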

TestNetworkPlugins/group/bridge/Start (9.83s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-237000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-237000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.826827s)

-- stdout --
	* [bridge-237000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-237000" primary control-plane node in "bridge-237000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-237000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:27:28.240328    5973 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:27:28.240451    5973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:27:28.240456    5973 out.go:358] Setting ErrFile to fd 2...
	I0912 15:27:28.240458    5973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:27:28.240585    5973 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:27:28.241750    5973 out.go:352] Setting JSON to false
	I0912 15:27:28.258707    5973 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5212,"bootTime":1726174836,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:27:28.258781    5973 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:27:28.265575    5973 out.go:177] * [bridge-237000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:27:28.273568    5973 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:27:28.273604    5973 notify.go:220] Checking for updates...
	I0912 15:27:28.281571    5973 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:27:28.283035    5973 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:27:28.286502    5973 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:27:28.289560    5973 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:27:28.292566    5973 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:27:28.295813    5973 config.go:182] Loaded profile config "multinode-323000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:27:28.295879    5973 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0912 15:27:28.295932    5973 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:27:28.300561    5973 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:27:28.307506    5973 start.go:297] selected driver: qemu2
	I0912 15:27:28.307511    5973 start.go:901] validating driver "qemu2" against <nil>
	I0912 15:27:28.307517    5973 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:27:28.309775    5973 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 15:27:28.312493    5973 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:27:28.315589    5973 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:27:28.315621    5973 cni.go:84] Creating CNI manager for "bridge"
	I0912 15:27:28.315625    5973 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 15:27:28.315656    5973 start.go:340] cluster config:
	{Name:bridge-237000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-237000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:27:28.319107    5973 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:27:28.322568    5973 out.go:177] * Starting "bridge-237000" primary control-plane node in "bridge-237000" cluster
	I0912 15:27:28.329523    5973 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 15:27:28.329538    5973 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0912 15:27:28.329547    5973 cache.go:56] Caching tarball of preloaded images
	I0912 15:27:28.329606    5973 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:27:28.329614    5973 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0912 15:27:28.329683    5973 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/bridge-237000/config.json ...
	I0912 15:27:28.329696    5973 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/bridge-237000/config.json: {Name:mk73cfbb65aaac2168ab5f23a85cc3c43c43e867 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:27:28.329916    5973 start.go:360] acquireMachinesLock for bridge-237000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:27:28.329948    5973 start.go:364] duration metric: took 26.084µs to acquireMachinesLock for "bridge-237000"
	I0912 15:27:28.329960    5973 start.go:93] Provisioning new machine with config: &{Name:bridge-237000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-237000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:27:28.329983    5973 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:27:28.337435    5973 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:27:28.352259    5973 start.go:159] libmachine.API.Create for "bridge-237000" (driver="qemu2")
	I0912 15:27:28.352287    5973 client.go:168] LocalClient.Create starting
	I0912 15:27:28.352344    5973 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:27:28.352372    5973 main.go:141] libmachine: Decoding PEM data...
	I0912 15:27:28.352379    5973 main.go:141] libmachine: Parsing certificate...
	I0912 15:27:28.352415    5973 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:27:28.352438    5973 main.go:141] libmachine: Decoding PEM data...
	I0912 15:27:28.352448    5973 main.go:141] libmachine: Parsing certificate...
	I0912 15:27:28.352786    5973 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:27:28.514995    5973 main.go:141] libmachine: Creating SSH key...
	I0912 15:27:28.567645    5973 main.go:141] libmachine: Creating Disk image...
	I0912 15:27:28.567651    5973 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:27:28.567843    5973 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/bridge-237000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/bridge-237000/disk.qcow2
	I0912 15:27:28.577027    5973 main.go:141] libmachine: STDOUT: 
	I0912 15:27:28.577056    5973 main.go:141] libmachine: STDERR: 
	I0912 15:27:28.577118    5973 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/bridge-237000/disk.qcow2 +20000M
	I0912 15:27:28.585253    5973 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:27:28.585270    5973 main.go:141] libmachine: STDERR: 
	I0912 15:27:28.585287    5973 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/bridge-237000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/bridge-237000/disk.qcow2
	I0912 15:27:28.585293    5973 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:27:28.585311    5973 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:27:28.585338    5973 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/bridge-237000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/bridge-237000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/bridge-237000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:8d:8b:b7:08:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/bridge-237000/disk.qcow2
	I0912 15:27:28.586968    5973 main.go:141] libmachine: STDOUT: 
	I0912 15:27:28.586984    5973 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:27:28.587011    5973 client.go:171] duration metric: took 234.724125ms to LocalClient.Create
	I0912 15:27:30.589077    5973 start.go:128] duration metric: took 2.259139167s to createHost
	I0912 15:27:30.589090    5973 start.go:83] releasing machines lock for "bridge-237000", held for 2.259189375s
	W0912 15:27:30.589113    5973 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:27:30.597527    5973 out.go:177] * Deleting "bridge-237000" in qemu2 ...
	W0912 15:27:30.616492    5973 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:27:30.616503    5973 start.go:729] Will try again in 5 seconds ...
	I0912 15:27:35.618558    5973 start.go:360] acquireMachinesLock for bridge-237000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:27:35.619006    5973 start.go:364] duration metric: took 363.167µs to acquireMachinesLock for "bridge-237000"
	I0912 15:27:35.619182    5973 start.go:93] Provisioning new machine with config: &{Name:bridge-237000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-237000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:27:35.619412    5973 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:27:35.624972    5973 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:27:35.663796    5973 start.go:159] libmachine.API.Create for "bridge-237000" (driver="qemu2")
	I0912 15:27:35.663859    5973 client.go:168] LocalClient.Create starting
	I0912 15:27:35.663962    5973 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:27:35.664017    5973 main.go:141] libmachine: Decoding PEM data...
	I0912 15:27:35.664034    5973 main.go:141] libmachine: Parsing certificate...
	I0912 15:27:35.664102    5973 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:27:35.664140    5973 main.go:141] libmachine: Decoding PEM data...
	I0912 15:27:35.664149    5973 main.go:141] libmachine: Parsing certificate...
	I0912 15:27:35.664835    5973 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:27:35.832281    5973 main.go:141] libmachine: Creating SSH key...
	I0912 15:27:35.969181    5973 main.go:141] libmachine: Creating Disk image...
	I0912 15:27:35.969190    5973 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:27:35.969450    5973 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/bridge-237000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/bridge-237000/disk.qcow2
	I0912 15:27:35.978860    5973 main.go:141] libmachine: STDOUT: 
	I0912 15:27:35.978881    5973 main.go:141] libmachine: STDERR: 
	I0912 15:27:35.978935    5973 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/bridge-237000/disk.qcow2 +20000M
	I0912 15:27:35.987030    5973 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:27:35.987046    5973 main.go:141] libmachine: STDERR: 
	I0912 15:27:35.987056    5973 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/bridge-237000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/bridge-237000/disk.qcow2
	I0912 15:27:35.987061    5973 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:27:35.987073    5973 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:27:35.987106    5973 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/bridge-237000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/bridge-237000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/bridge-237000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:f4:65:7f:89:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/bridge-237000/disk.qcow2
	I0912 15:27:35.988733    5973 main.go:141] libmachine: STDOUT: 
	I0912 15:27:35.988749    5973 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:27:35.988763    5973 client.go:171] duration metric: took 324.9055ms to LocalClient.Create
	I0912 15:27:37.990937    5973 start.go:128] duration metric: took 2.371536875s to createHost
	I0912 15:27:37.991027    5973 start.go:83] releasing machines lock for "bridge-237000", held for 2.372033834s
	W0912 15:27:37.991421    5973 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-237000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-237000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:27:38.001132    5973 out.go:201] 
	W0912 15:27:38.012257    5973 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:27:38.012289    5973 out.go:270] * 
	* 
	W0912 15:27:38.014717    5973 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:27:38.028151    5973 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.83s)
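Recovery requires (re)starting the daemon as root before rerunning the suite. A sketch assuming the manual /opt/socket_vmnet install that this log's client path points at, managed by launchd under the label io.github.lima-vm.socket_vmnet (the label is an assumption based on the socket_vmnet README; adjust it to match the agent's actual LaunchDaemon):

	# Is the daemon registered with launchd at all?
	sudo launchctl list | grep socket_vmnet
	# Restart (or start) the daemon, then retry a single start to verify.
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet

Until the socket accepts connections again, the remaining qemu2-driver tests in this report will keep exiting with status 80 for the same reason.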

TestNetworkPlugins/group/kubenet/Start (9.85s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-237000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-237000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.852713416s)

-- stdout --
	* [kubenet-237000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-237000" primary control-plane node in "kubenet-237000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-237000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:27:40.214727    6082 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:27:40.214855    6082 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:27:40.214859    6082 out.go:358] Setting ErrFile to fd 2...
	I0912 15:27:40.214864    6082 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:27:40.215002    6082 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:27:40.216102    6082 out.go:352] Setting JSON to false
	I0912 15:27:40.232434    6082 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5224,"bootTime":1726174836,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:27:40.232513    6082 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:27:40.238343    6082 out.go:177] * [kubenet-237000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:27:40.247067    6082 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:27:40.247127    6082 notify.go:220] Checking for updates...
	I0912 15:27:40.254011    6082 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:27:40.256938    6082 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:27:40.260009    6082 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:27:40.262997    6082 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:27:40.265975    6082 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:27:40.274320    6082 config.go:182] Loaded profile config "multinode-323000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:27:40.274391    6082 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0912 15:27:40.274440    6082 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:27:40.278883    6082 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:27:40.285926    6082 start.go:297] selected driver: qemu2
	I0912 15:27:40.285931    6082 start.go:901] validating driver "qemu2" against <nil>
	I0912 15:27:40.285936    6082 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:27:40.288293    6082 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 15:27:40.290975    6082 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:27:40.294024    6082 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:27:40.294059    6082 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0912 15:27:40.294094    6082 start.go:340] cluster config:
	{Name:kubenet-237000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-237000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:27:40.297950    6082 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:27:40.302990    6082 out.go:177] * Starting "kubenet-237000" primary control-plane node in "kubenet-237000" cluster
	I0912 15:27:40.306992    6082 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 15:27:40.307013    6082 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0912 15:27:40.307024    6082 cache.go:56] Caching tarball of preloaded images
	I0912 15:27:40.307092    6082 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:27:40.307097    6082 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0912 15:27:40.307185    6082 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/kubenet-237000/config.json ...
	I0912 15:27:40.307197    6082 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/kubenet-237000/config.json: {Name:mk8835c8ca5e3f50994700033c4d9dbe21cc803c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:27:40.307629    6082 start.go:360] acquireMachinesLock for kubenet-237000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:27:40.307662    6082 start.go:364] duration metric: took 27.167µs to acquireMachinesLock for "kubenet-237000"
	I0912 15:27:40.307674    6082 start.go:93] Provisioning new machine with config: &{Name:kubenet-237000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-237000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:27:40.307711    6082 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:27:40.316977    6082 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:27:40.333636    6082 start.go:159] libmachine.API.Create for "kubenet-237000" (driver="qemu2")
	I0912 15:27:40.333664    6082 client.go:168] LocalClient.Create starting
	I0912 15:27:40.333729    6082 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:27:40.333759    6082 main.go:141] libmachine: Decoding PEM data...
	I0912 15:27:40.333768    6082 main.go:141] libmachine: Parsing certificate...
	I0912 15:27:40.333806    6082 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:27:40.333829    6082 main.go:141] libmachine: Decoding PEM data...
	I0912 15:27:40.333836    6082 main.go:141] libmachine: Parsing certificate...
	I0912 15:27:40.334347    6082 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:27:40.494235    6082 main.go:141] libmachine: Creating SSH key...
	I0912 15:27:40.557583    6082 main.go:141] libmachine: Creating Disk image...
	I0912 15:27:40.557590    6082 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:27:40.557788    6082 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubenet-237000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubenet-237000/disk.qcow2
	I0912 15:27:40.567484    6082 main.go:141] libmachine: STDOUT: 
	I0912 15:27:40.567504    6082 main.go:141] libmachine: STDERR: 
	I0912 15:27:40.567549    6082 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubenet-237000/disk.qcow2 +20000M
	I0912 15:27:40.575665    6082 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:27:40.575681    6082 main.go:141] libmachine: STDERR: 
	I0912 15:27:40.575702    6082 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubenet-237000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubenet-237000/disk.qcow2
	I0912 15:27:40.575707    6082 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:27:40.575723    6082 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:27:40.575765    6082 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubenet-237000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubenet-237000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubenet-237000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:e7:fe:72:09:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubenet-237000/disk.qcow2
	I0912 15:27:40.577487    6082 main.go:141] libmachine: STDOUT: 
	I0912 15:27:40.577505    6082 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:27:40.577526    6082 client.go:171] duration metric: took 243.862875ms to LocalClient.Create
	I0912 15:27:42.579708    6082 start.go:128] duration metric: took 2.272015417s to createHost
	I0912 15:27:42.579860    6082 start.go:83] releasing machines lock for "kubenet-237000", held for 2.272221s
	W0912 15:27:42.579909    6082 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:27:42.587292    6082 out.go:177] * Deleting "kubenet-237000" in qemu2 ...
	W0912 15:27:42.625244    6082 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:27:42.625273    6082 start.go:729] Will try again in 5 seconds ...
	I0912 15:27:47.627420    6082 start.go:360] acquireMachinesLock for kubenet-237000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:27:47.627957    6082 start.go:364] duration metric: took 444.042µs to acquireMachinesLock for "kubenet-237000"
	I0912 15:27:47.628099    6082 start.go:93] Provisioning new machine with config: &{Name:kubenet-237000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-237000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:27:47.628391    6082 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:27:47.637954    6082 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:27:47.690212    6082 start.go:159] libmachine.API.Create for "kubenet-237000" (driver="qemu2")
	I0912 15:27:47.690260    6082 client.go:168] LocalClient.Create starting
	I0912 15:27:47.690376    6082 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:27:47.690444    6082 main.go:141] libmachine: Decoding PEM data...
	I0912 15:27:47.690463    6082 main.go:141] libmachine: Parsing certificate...
	I0912 15:27:47.690533    6082 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:27:47.690590    6082 main.go:141] libmachine: Decoding PEM data...
	I0912 15:27:47.690603    6082 main.go:141] libmachine: Parsing certificate...
	I0912 15:27:47.691192    6082 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:27:47.861967    6082 main.go:141] libmachine: Creating SSH key...
	I0912 15:27:47.969292    6082 main.go:141] libmachine: Creating Disk image...
	I0912 15:27:47.969298    6082 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:27:47.969508    6082 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubenet-237000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubenet-237000/disk.qcow2
	I0912 15:27:47.978984    6082 main.go:141] libmachine: STDOUT: 
	I0912 15:27:47.979018    6082 main.go:141] libmachine: STDERR: 
	I0912 15:27:47.979075    6082 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubenet-237000/disk.qcow2 +20000M
	I0912 15:27:47.987147    6082 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:27:47.987169    6082 main.go:141] libmachine: STDERR: 
	I0912 15:27:47.987185    6082 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubenet-237000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubenet-237000/disk.qcow2
	I0912 15:27:47.987189    6082 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:27:47.987198    6082 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:27:47.987240    6082 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubenet-237000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubenet-237000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubenet-237000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:80:c0:0f:f0:1e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/kubenet-237000/disk.qcow2
	I0912 15:27:47.988885    6082 main.go:141] libmachine: STDOUT: 
	I0912 15:27:47.988906    6082 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:27:47.988918    6082 client.go:171] duration metric: took 298.660458ms to LocalClient.Create
	I0912 15:27:49.991092    6082 start.go:128] duration metric: took 2.362708125s to createHost
	I0912 15:27:49.991350    6082 start.go:83] releasing machines lock for "kubenet-237000", held for 2.363324s
	W0912 15:27:49.991701    6082 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-237000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:27:50.000363    6082 out.go:201] 
	W0912 15:27:50.011312    6082 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:27:50.011375    6082 out.go:270] * 
	W0912 15:27:50.014067    6082 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:27:50.026273    6082 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.85s)
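
Note: this failure, and every other start failure in this report, bottoms out in the same host-side condition: nothing is accepting connections on /var/run/socket_vmnet, so socket_vmnet_client exits before QEMU can boot the VM. The following diagnostic sketch for the CI host is illustrative only and assumes the /opt/socket_vmnet install prefix that appears in the log:

	# Is the unix socket present at the path minikube uses?
	ls -l /var/run/socket_vmnet
	# Probe it directly; "Connection refused" here reproduces the test failure.
	nc -U /var/run/socket_vmnet < /dev/null
	# Is a socket_vmnet daemon loaded at all?
	sudo launchctl list | grep -i socket_vmnet

If the probe is refused, restarting the socket_vmnet daemon on the host is the likely remedy, and all of the "Connection refused" failures in this report would be expected to clear together.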

TestStartStop/group/old-k8s-version/serial/FirstStart (9.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-196000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-196000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.711144375s)

-- stdout --
	* [old-k8s-version-196000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-196000" primary control-plane node in "old-k8s-version-196000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-196000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:27:52.206831    6201 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:27:52.206953    6201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:27:52.206956    6201 out.go:358] Setting ErrFile to fd 2...
	I0912 15:27:52.206971    6201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:27:52.207106    6201 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:27:52.208237    6201 out.go:352] Setting JSON to false
	I0912 15:27:52.224265    6201 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5236,"bootTime":1726174836,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:27:52.224332    6201 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:27:52.230464    6201 out.go:177] * [old-k8s-version-196000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:27:52.237150    6201 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:27:52.237265    6201 notify.go:220] Checking for updates...
	I0912 15:27:52.244108    6201 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:27:52.247137    6201 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:27:52.250134    6201 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:27:52.253146    6201 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:27:52.256188    6201 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:27:52.259544    6201 config.go:182] Loaded profile config "multinode-323000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:27:52.259608    6201 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0912 15:27:52.259655    6201 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:27:52.264115    6201 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:27:52.271210    6201 start.go:297] selected driver: qemu2
	I0912 15:27:52.271215    6201 start.go:901] validating driver "qemu2" against <nil>
	I0912 15:27:52.271222    6201 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:27:52.273429    6201 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 15:27:52.277123    6201 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:27:52.280202    6201 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:27:52.280251    6201 cni.go:84] Creating CNI manager for ""
	I0912 15:27:52.280259    6201 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0912 15:27:52.280315    6201 start.go:340] cluster config:
	{Name:old-k8s-version-196000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:27:52.283943    6201 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:27:52.291160    6201 out.go:177] * Starting "old-k8s-version-196000" primary control-plane node in "old-k8s-version-196000" cluster
	I0912 15:27:52.295193    6201 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0912 15:27:52.295210    6201 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0912 15:27:52.295220    6201 cache.go:56] Caching tarball of preloaded images
	I0912 15:27:52.295279    6201 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:27:52.295285    6201 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0912 15:27:52.295340    6201 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/old-k8s-version-196000/config.json ...
	I0912 15:27:52.295357    6201 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/old-k8s-version-196000/config.json: {Name:mkd1ef39e08be59ab205cae9bd89205962587fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:27:52.295564    6201 start.go:360] acquireMachinesLock for old-k8s-version-196000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:27:52.295595    6201 start.go:364] duration metric: took 25.667µs to acquireMachinesLock for "old-k8s-version-196000"
	I0912 15:27:52.295607    6201 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-196000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:27:52.295632    6201 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:27:52.303118    6201 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 15:27:52.319912    6201 start.go:159] libmachine.API.Create for "old-k8s-version-196000" (driver="qemu2")
	I0912 15:27:52.319944    6201 client.go:168] LocalClient.Create starting
	I0912 15:27:52.320006    6201 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:27:52.320039    6201 main.go:141] libmachine: Decoding PEM data...
	I0912 15:27:52.320048    6201 main.go:141] libmachine: Parsing certificate...
	I0912 15:27:52.320085    6201 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:27:52.320114    6201 main.go:141] libmachine: Decoding PEM data...
	I0912 15:27:52.320121    6201 main.go:141] libmachine: Parsing certificate...
	I0912 15:27:52.320467    6201 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:27:52.480764    6201 main.go:141] libmachine: Creating SSH key...
	I0912 15:27:52.530098    6201 main.go:141] libmachine: Creating Disk image...
	I0912 15:27:52.530109    6201 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:27:52.530337    6201 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/old-k8s-version-196000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/old-k8s-version-196000/disk.qcow2
	I0912 15:27:52.539755    6201 main.go:141] libmachine: STDOUT: 
	I0912 15:27:52.539774    6201 main.go:141] libmachine: STDERR: 
	I0912 15:27:52.539826    6201 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/old-k8s-version-196000/disk.qcow2 +20000M
	I0912 15:27:52.547873    6201 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:27:52.547889    6201 main.go:141] libmachine: STDERR: 
	I0912 15:27:52.547907    6201 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/old-k8s-version-196000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/old-k8s-version-196000/disk.qcow2
	I0912 15:27:52.547912    6201 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:27:52.547925    6201 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:27:52.547952    6201 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/old-k8s-version-196000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/old-k8s-version-196000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/old-k8s-version-196000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:fb:b6:6f:e7:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/old-k8s-version-196000/disk.qcow2
	I0912 15:27:52.549700    6201 main.go:141] libmachine: STDOUT: 
	I0912 15:27:52.549719    6201 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:27:52.549738    6201 client.go:171] duration metric: took 229.794625ms to LocalClient.Create
	I0912 15:27:54.551797    6201 start.go:128] duration metric: took 2.256199459s to createHost
	I0912 15:27:54.551870    6201 start.go:83] releasing machines lock for "old-k8s-version-196000", held for 2.256319416s
	W0912 15:27:54.551896    6201 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:27:54.561351    6201 out.go:177] * Deleting "old-k8s-version-196000" in qemu2 ...
	W0912 15:27:54.585571    6201 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:27:54.585582    6201 start.go:729] Will try again in 5 seconds ...
	I0912 15:27:59.587646    6201 start.go:360] acquireMachinesLock for old-k8s-version-196000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:27:59.587997    6201 start.go:364] duration metric: took 267.042µs to acquireMachinesLock for "old-k8s-version-196000"
	I0912 15:27:59.588070    6201 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-196000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:27:59.588223    6201 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:27:59.596533    6201 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 15:27:59.634455    6201 start.go:159] libmachine.API.Create for "old-k8s-version-196000" (driver="qemu2")
	I0912 15:27:59.634509    6201 client.go:168] LocalClient.Create starting
	I0912 15:27:59.634629    6201 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:27:59.634687    6201 main.go:141] libmachine: Decoding PEM data...
	I0912 15:27:59.634707    6201 main.go:141] libmachine: Parsing certificate...
	I0912 15:27:59.634761    6201 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:27:59.634802    6201 main.go:141] libmachine: Decoding PEM data...
	I0912 15:27:59.634823    6201 main.go:141] libmachine: Parsing certificate...
	I0912 15:27:59.635288    6201 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:27:59.799616    6201 main.go:141] libmachine: Creating SSH key...
	I0912 15:27:59.833817    6201 main.go:141] libmachine: Creating Disk image...
	I0912 15:27:59.833823    6201 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:27:59.834032    6201 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/old-k8s-version-196000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/old-k8s-version-196000/disk.qcow2
	I0912 15:27:59.843481    6201 main.go:141] libmachine: STDOUT: 
	I0912 15:27:59.843503    6201 main.go:141] libmachine: STDERR: 
	I0912 15:27:59.843554    6201 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/old-k8s-version-196000/disk.qcow2 +20000M
	I0912 15:27:59.851432    6201 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:27:59.851449    6201 main.go:141] libmachine: STDERR: 
	I0912 15:27:59.851460    6201 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/old-k8s-version-196000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/old-k8s-version-196000/disk.qcow2
	I0912 15:27:59.851464    6201 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:27:59.851475    6201 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:27:59.851502    6201 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/old-k8s-version-196000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/old-k8s-version-196000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/old-k8s-version-196000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:61:c2:4e:6f:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/old-k8s-version-196000/disk.qcow2
	I0912 15:27:59.853178    6201 main.go:141] libmachine: STDOUT: 
	I0912 15:27:59.853196    6201 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:27:59.853208    6201 client.go:171] duration metric: took 218.699709ms to LocalClient.Create
	I0912 15:28:01.855251    6201 start.go:128] duration metric: took 2.267060417s to createHost
	I0912 15:28:01.855293    6201 start.go:83] releasing machines lock for "old-k8s-version-196000", held for 2.267334416s
	W0912 15:28:01.855427    6201 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-196000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:28:01.859823    6201 out.go:201] 
	W0912 15:28:01.863768    6201 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:28:01.863776    6201 out.go:270] * 
	W0912 15:28:01.864374    6201 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:28:01.878715    6201 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-196000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-196000 -n old-k8s-version-196000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-196000 -n old-k8s-version-196000: exit status 7 (34.705792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-196000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.75s)
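
Note: because this first start never created a VM, no kubeconfig context was written for old-k8s-version-196000, so the remaining steps in this serial group (DeployApp, EnableAddonWhileActive, SecondStart, and the rest) fail at once on the missing context rather than exposing new problems. Two illustrative checks, assuming the same working directory as the test run:

	# The context the later steps depend on was never created:
	kubectl config get-contexts old-k8s-version-196000
	# minikube agrees the profile's host is down (exit status 7, as shown above):
	out/minikube-darwin-arm64 status -p old-k8s-version-196000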

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-196000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-196000 create -f testdata/busybox.yaml: exit status 1 (27.161ms)

** stderr ** 
	error: context "old-k8s-version-196000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-196000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-196000 -n old-k8s-version-196000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-196000 -n old-k8s-version-196000: exit status 7 (30.190375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-196000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-196000 -n old-k8s-version-196000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-196000 -n old-k8s-version-196000: exit status 7 (29.795834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-196000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-196000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-196000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-196000 describe deploy/metrics-server -n kube-system: exit status 1 (26.916167ms)

** stderr ** 
	error: context "old-k8s-version-196000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-196000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-196000 -n old-k8s-version-196000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-196000 -n old-k8s-version-196000: exit status 7 (29.211333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-196000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
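
Note: the addons enable command itself succeeded here; with no running cluster it only records the metrics-server image and registry overrides in the profile's config.json (they reappear as CustomAddonImages and CustomAddonRegistries in the SecondStart cluster config below). Only the kubectl verification failed, again for want of a context. An illustrative offline check against the profile path shown earlier in the log:

	# The fake.domain registry override is stored in the profile, not in a cluster:
	grep -c 'fake.domain' /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/old-k8s-version-196000/config.json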

TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-196000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-196000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.193268792s)

-- stdout --
	* [old-k8s-version-196000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-196000" primary control-plane node in "old-k8s-version-196000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-196000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-196000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:28:04.324445    6248 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:28:04.324578    6248 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:28:04.324582    6248 out.go:358] Setting ErrFile to fd 2...
	I0912 15:28:04.324584    6248 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:28:04.324707    6248 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:28:04.325739    6248 out.go:352] Setting JSON to false
	I0912 15:28:04.341947    6248 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5248,"bootTime":1726174836,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:28:04.342028    6248 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:28:04.350183    6248 out.go:177] * [old-k8s-version-196000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:28:04.357120    6248 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:28:04.357183    6248 notify.go:220] Checking for updates...
	I0912 15:28:04.364176    6248 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:28:04.367170    6248 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:28:04.370137    6248 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:28:04.373160    6248 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:28:04.376098    6248 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:28:04.379431    6248 config.go:182] Loaded profile config "old-k8s-version-196000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0912 15:28:04.383099    6248 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0912 15:28:04.386087    6248 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:28:04.390126    6248 out.go:177] * Using the qemu2 driver based on existing profile
	I0912 15:28:04.397105    6248 start.go:297] selected driver: qemu2
	I0912 15:28:04.397112    6248 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-196000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:28:04.397177    6248 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:28:04.399543    6248 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:28:04.399570    6248 cni.go:84] Creating CNI manager for ""
	I0912 15:28:04.399577    6248 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0912 15:28:04.399604    6248 start.go:340] cluster config:
	{Name:old-k8s-version-196000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:28:04.403158    6248 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:28:04.410153    6248 out.go:177] * Starting "old-k8s-version-196000" primary control-plane node in "old-k8s-version-196000" cluster
	I0912 15:28:04.414129    6248 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0912 15:28:04.414143    6248 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0912 15:28:04.414155    6248 cache.go:56] Caching tarball of preloaded images
	I0912 15:28:04.414232    6248 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:28:04.414237    6248 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0912 15:28:04.414305    6248 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/old-k8s-version-196000/config.json ...
	I0912 15:28:04.414837    6248 start.go:360] acquireMachinesLock for old-k8s-version-196000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:28:04.414864    6248 start.go:364] duration metric: took 21.041µs to acquireMachinesLock for "old-k8s-version-196000"
	I0912 15:28:04.414874    6248 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:28:04.414878    6248 fix.go:54] fixHost starting: 
	I0912 15:28:04.414991    6248 fix.go:112] recreateIfNeeded on old-k8s-version-196000: state=Stopped err=<nil>
	W0912 15:28:04.414999    6248 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 15:28:04.418110    6248 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-196000" ...
	I0912 15:28:04.426141    6248 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:28:04.426174    6248 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/old-k8s-version-196000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/old-k8s-version-196000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/old-k8s-version-196000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:61:c2:4e:6f:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/old-k8s-version-196000/disk.qcow2
	I0912 15:28:04.428241    6248 main.go:141] libmachine: STDOUT: 
	I0912 15:28:04.428260    6248 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:28:04.428283    6248 fix.go:56] duration metric: took 13.405042ms for fixHost
	I0912 15:28:04.428286    6248 start.go:83] releasing machines lock for "old-k8s-version-196000", held for 13.418166ms
	W0912 15:28:04.428294    6248 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:28:04.428325    6248 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:28:04.428329    6248 start.go:729] Will try again in 5 seconds ...
	I0912 15:28:09.430382    6248 start.go:360] acquireMachinesLock for old-k8s-version-196000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:28:09.430935    6248 start.go:364] duration metric: took 457.541µs to acquireMachinesLock for "old-k8s-version-196000"
	I0912 15:28:09.431087    6248 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:28:09.431111    6248 fix.go:54] fixHost starting: 
	I0912 15:28:09.431870    6248 fix.go:112] recreateIfNeeded on old-k8s-version-196000: state=Stopped err=<nil>
	W0912 15:28:09.431896    6248 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 15:28:09.440625    6248 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-196000" ...
	I0912 15:28:09.444506    6248 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:28:09.444695    6248 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/old-k8s-version-196000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/old-k8s-version-196000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/old-k8s-version-196000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:61:c2:4e:6f:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/old-k8s-version-196000/disk.qcow2
	I0912 15:28:09.454674    6248 main.go:141] libmachine: STDOUT: 
	I0912 15:28:09.454741    6248 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:28:09.454836    6248 fix.go:56] duration metric: took 23.728792ms for fixHost
	I0912 15:28:09.454852    6248 start.go:83] releasing machines lock for "old-k8s-version-196000", held for 23.893583ms
	W0912 15:28:09.455016    6248 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-196000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-196000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:28:09.463668    6248 out.go:201] 
	W0912 15:28:09.467760    6248 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:28:09.467792    6248 out.go:270] * 
	* 
	W0912 15:28:09.470533    6248 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:28:09.478573    6248 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-196000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-196000 -n old-k8s-version-196000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-196000 -n old-k8s-version-196000: exit status 7 (58.434ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-196000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)
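Note: every start attempt in this group fails at the same step. /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU never receives its network file descriptor (the -netdev socket,id=net0,fd=3 in the command lines above) and the VM never boots. A quick way to confirm whether the daemon is alive on the build agent, sketched here under the assumption that socket_vmnet is installed in the /opt/socket_vmnet layout shown in the logs:

	# does the unix socket exist, and is a socket_vmnet process serving it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# list open unix-domain sockets and look for socket_vmnet
	sudo lsof -U | grep socket_vmnet

If no daemon is listening, every qemu2 start on this host will fail the same way until the service is restarted.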

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-196000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-196000 -n old-k8s-version-196000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-196000 -n old-k8s-version-196000: exit status 7 (30.776584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-196000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
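Note: none of the start attempts for this profile succeeded, so its context was never written to the test kubeconfig; this check and the ones that follow therefore fail immediately with the "context does not exist" error rather than timing out. This can be verified by hand against the kubeconfig path from the log:

	# list contexts in the integration-test kubeconfig; old-k8s-version-196000 should be absent
	KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig kubectl config get-contexts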

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-196000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-196000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-196000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.132583ms)

** stderr ** 
	error: context "old-k8s-version-196000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-196000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-196000 -n old-k8s-version-196000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-196000 -n old-k8s-version-196000: exit status 7 (29.912791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-196000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-196000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-196000 -n old-k8s-version-196000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-196000 -n old-k8s-version-196000: exit status 7 (28.806875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-196000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
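Note: in the (-want +got) diff above, lines prefixed with "-" are entries expected in the image list but missing from the actual output. Since the host never started, "image list" returned nothing, so every expected v1.20.0 image (published under k8s.gcr.io for that release) is reported missing. The check can be reproduced by hand with the command from the log:

	# reproduce the image check manually (profile name taken from the log above)
	out/minikube-darwin-arm64 -p old-k8s-version-196000 image list --format=json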

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-196000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-196000 --alsologtostderr -v=1: exit status 83 (42.715ms)

-- stdout --
	* The control-plane node old-k8s-version-196000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-196000"

-- /stdout --
** stderr ** 
	I0912 15:28:09.738868    6270 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:28:09.739250    6270 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:28:09.739254    6270 out.go:358] Setting ErrFile to fd 2...
	I0912 15:28:09.739256    6270 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:28:09.739427    6270 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:28:09.739632    6270 out.go:352] Setting JSON to false
	I0912 15:28:09.739642    6270 mustload.go:65] Loading cluster: old-k8s-version-196000
	I0912 15:28:09.739832    6270 config.go:182] Loaded profile config "old-k8s-version-196000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0912 15:28:09.743276    6270 out.go:177] * The control-plane node old-k8s-version-196000 host is not running: state=Stopped
	I0912 15:28:09.746260    6270 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-196000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-196000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-196000 -n old-k8s-version-196000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-196000 -n old-k8s-version-196000: exit status 7 (28.862ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-196000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-196000 -n old-k8s-version-196000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-196000 -n old-k8s-version-196000: exit status 7 (29.58775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-196000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (10.06s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-558000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-558000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (10.002358125s)

-- stdout --
	* [no-preload-558000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-558000" primary control-plane node in "no-preload-558000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-558000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:28:10.061051    6287 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:28:10.061167    6287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:28:10.061172    6287 out.go:358] Setting ErrFile to fd 2...
	I0912 15:28:10.061175    6287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:28:10.061309    6287 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:28:10.062390    6287 out.go:352] Setting JSON to false
	I0912 15:28:10.078909    6287 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5254,"bootTime":1726174836,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:28:10.078987    6287 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:28:10.082610    6287 out.go:177] * [no-preload-558000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:28:10.090510    6287 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:28:10.090547    6287 notify.go:220] Checking for updates...
	I0912 15:28:10.097486    6287 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:28:10.100525    6287 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:28:10.103526    6287 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:28:10.105011    6287 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:28:10.108490    6287 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:28:10.111772    6287 config.go:182] Loaded profile config "multinode-323000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:28:10.111836    6287 config.go:182] Loaded profile config "stopped-upgrade-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0912 15:28:10.111879    6287 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:28:10.116359    6287 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:28:10.123516    6287 start.go:297] selected driver: qemu2
	I0912 15:28:10.123525    6287 start.go:901] validating driver "qemu2" against <nil>
	I0912 15:28:10.123531    6287 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:28:10.125807    6287 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 15:28:10.128545    6287 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:28:10.131609    6287 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:28:10.131633    6287 cni.go:84] Creating CNI manager for ""
	I0912 15:28:10.131640    6287 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:28:10.131644    6287 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 15:28:10.131679    6287 start.go:340] cluster config:
	{Name:no-preload-558000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-558000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:28:10.135355    6287 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:28:10.141507    6287 out.go:177] * Starting "no-preload-558000" primary control-plane node in "no-preload-558000" cluster
	I0912 15:28:10.145506    6287 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 15:28:10.145590    6287 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/no-preload-558000/config.json ...
	I0912 15:28:10.145611    6287 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/no-preload-558000/config.json: {Name:mkc74d2b47ff0d3baf77503dc42594b53b694c0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:28:10.145611    6287 cache.go:107] acquiring lock: {Name:mkb2a64d3e3719cf8754386c1b8c2a886238e6a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:28:10.145632    6287 cache.go:107] acquiring lock: {Name:mkd529cdbee2f60eda17c12cbf4e462479ecfdf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:28:10.145656    6287 cache.go:107] acquiring lock: {Name:mkfe2304d537ba483d3e534aa7e738da6eb0c8f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:28:10.145700    6287 cache.go:115] /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0912 15:28:10.145716    6287 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 108.416µs
	I0912 15:28:10.145724    6287 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0912 15:28:10.145735    6287 cache.go:107] acquiring lock: {Name:mkbe2d8ad392940baeeb47f8bfd43e23fd508e27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:28:10.145779    6287 cache.go:107] acquiring lock: {Name:mkb1ef8617b3eb43cb31b374de479cb4ab6ed8f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:28:10.145788    6287 cache.go:107] acquiring lock: {Name:mk130315e7babf2009c66d827e5ea8f2e7e3b929 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:28:10.145793    6287 cache.go:107] acquiring lock: {Name:mk4e8510d5d5efabb4aa841ac979aa83dc632b25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:28:10.145813    6287 cache.go:107] acquiring lock: {Name:mk8960d3fb9a7d2712804e60fad3e390a585dc81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:28:10.146138    6287 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 15:28:10.146174    6287 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0912 15:28:10.146247    6287 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0912 15:28:10.146256    6287 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0912 15:28:10.146264    6287 start.go:360] acquireMachinesLock for no-preload-558000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:28:10.146271    6287 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0912 15:28:10.146283    6287 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0912 15:28:10.146296    6287 start.go:364] duration metric: took 27.125µs to acquireMachinesLock for "no-preload-558000"
	I0912 15:28:10.146256    6287 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0912 15:28:10.146309    6287 start.go:93] Provisioning new machine with config: &{Name:no-preload-558000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-558000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:28:10.146336    6287 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:28:10.149542    6287 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 15:28:10.155960    6287 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0912 15:28:10.157805    6287 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0912 15:28:10.157859    6287 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 15:28:10.157899    6287 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0912 15:28:10.158213    6287 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0912 15:28:10.158234    6287 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0912 15:28:10.158320    6287 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0912 15:28:10.166699    6287 start.go:159] libmachine.API.Create for "no-preload-558000" (driver="qemu2")
	I0912 15:28:10.166719    6287 client.go:168] LocalClient.Create starting
	I0912 15:28:10.166805    6287 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:28:10.166835    6287 main.go:141] libmachine: Decoding PEM data...
	I0912 15:28:10.166845    6287 main.go:141] libmachine: Parsing certificate...
	I0912 15:28:10.166884    6287 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:28:10.166908    6287 main.go:141] libmachine: Decoding PEM data...
	I0912 15:28:10.166922    6287 main.go:141] libmachine: Parsing certificate...
	I0912 15:28:10.167249    6287 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:28:10.332298    6287 main.go:141] libmachine: Creating SSH key...
	I0912 15:28:10.523015    6287 cache.go:162] opening:  /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I0912 15:28:10.525624    6287 main.go:141] libmachine: Creating Disk image...
	I0912 15:28:10.525634    6287 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:28:10.525868    6287 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/no-preload-558000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/no-preload-558000/disk.qcow2
	I0912 15:28:10.535441    6287 main.go:141] libmachine: STDOUT: 
	I0912 15:28:10.535460    6287 main.go:141] libmachine: STDERR: 
	I0912 15:28:10.535509    6287 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/no-preload-558000/disk.qcow2 +20000M
	I0912 15:28:10.543889    6287 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:28:10.543904    6287 main.go:141] libmachine: STDERR: 
	I0912 15:28:10.543912    6287 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/no-preload-558000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/no-preload-558000/disk.qcow2
	I0912 15:28:10.543917    6287 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:28:10.543928    6287 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:28:10.543954    6287 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/no-preload-558000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/no-preload-558000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/no-preload-558000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:4e:3b:5c:a2:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/no-preload-558000/disk.qcow2
	I0912 15:28:10.545777    6287 main.go:141] libmachine: STDOUT: 
	I0912 15:28:10.545795    6287 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:28:10.545813    6287 client.go:171] duration metric: took 379.098167ms to LocalClient.Create
	I0912 15:28:10.555916    6287 cache.go:162] opening:  /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0912 15:28:10.557018    6287 cache.go:162] opening:  /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0912 15:28:10.562843    6287 cache.go:162] opening:  /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I0912 15:28:10.596890    6287 cache.go:162] opening:  /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0912 15:28:10.645655    6287 cache.go:162] opening:  /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I0912 15:28:10.653032    6287 cache.go:162] opening:  /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I0912 15:28:10.788775    6287 cache.go:157] /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0912 15:28:10.788789    6287 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 643.036125ms
	I0912 15:28:10.788795    6287 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0912 15:28:12.545942    6287 start.go:128] duration metric: took 2.399645959s to createHost
	I0912 15:28:12.545979    6287 start.go:83] releasing machines lock for "no-preload-558000", held for 2.399731709s
	W0912 15:28:12.546019    6287 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:28:12.563312    6287 out.go:177] * Deleting "no-preload-558000" in qemu2 ...
	W0912 15:28:12.582671    6287 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:28:12.582682    6287 start.go:729] Will try again in 5 seconds ...
	I0912 15:28:13.290812    6287 cache.go:157] /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0912 15:28:13.290847    6287 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 3.145147375s
	I0912 15:28:13.290862    6287 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0912 15:28:13.358924    6287 cache.go:157] /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0912 15:28:13.358943    6287 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 3.213405458s
	I0912 15:28:13.358954    6287 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0912 15:28:13.526849    6287 cache.go:157] /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0912 15:28:13.526874    6287 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 3.381192s
	I0912 15:28:13.526889    6287 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0912 15:28:14.775518    6287 cache.go:157] /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0912 15:28:14.775554    6287 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 4.629921083s
	I0912 15:28:14.775573    6287 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0912 15:28:15.034094    6287 cache.go:157] /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0912 15:28:15.034144    6287 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 4.888621667s
	I0912 15:28:15.034175    6287 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0912 15:28:17.584708    6287 start.go:360] acquireMachinesLock for no-preload-558000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:28:17.585113    6287 start.go:364] duration metric: took 343.125µs to acquireMachinesLock for "no-preload-558000"
	I0912 15:28:17.585256    6287 start.go:93] Provisioning new machine with config: &{Name:no-preload-558000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-558000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:28:17.585456    6287 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:28:17.594993    6287 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 15:28:17.633626    6287 start.go:159] libmachine.API.Create for "no-preload-558000" (driver="qemu2")
	I0912 15:28:17.633675    6287 client.go:168] LocalClient.Create starting
	I0912 15:28:17.633784    6287 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:28:17.633847    6287 main.go:141] libmachine: Decoding PEM data...
	I0912 15:28:17.633862    6287 main.go:141] libmachine: Parsing certificate...
	I0912 15:28:17.633929    6287 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:28:17.633969    6287 main.go:141] libmachine: Decoding PEM data...
	I0912 15:28:17.633986    6287 main.go:141] libmachine: Parsing certificate...
	I0912 15:28:17.634438    6287 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:28:17.804309    6287 main.go:141] libmachine: Creating SSH key...
	I0912 15:28:17.970745    6287 main.go:141] libmachine: Creating Disk image...
	I0912 15:28:17.970752    6287 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:28:17.970960    6287 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/no-preload-558000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/no-preload-558000/disk.qcow2
	I0912 15:28:17.980631    6287 main.go:141] libmachine: STDOUT: 
	I0912 15:28:17.980649    6287 main.go:141] libmachine: STDERR: 
	I0912 15:28:17.980694    6287 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/no-preload-558000/disk.qcow2 +20000M
	I0912 15:28:17.989305    6287 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:28:17.989325    6287 main.go:141] libmachine: STDERR: 
	I0912 15:28:17.989340    6287 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/no-preload-558000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/no-preload-558000/disk.qcow2
	I0912 15:28:17.989346    6287 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:28:17.989361    6287 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:28:17.989409    6287 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/no-preload-558000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/no-preload-558000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/no-preload-558000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:35:27:4e:fc:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/no-preload-558000/disk.qcow2
	I0912 15:28:17.991191    6287 main.go:141] libmachine: STDOUT: 
	I0912 15:28:17.991209    6287 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:28:17.991233    6287 client.go:171] duration metric: took 357.550375ms to LocalClient.Create
	I0912 15:28:18.458832    6287 cache.go:157] /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0912 15:28:18.458897    6287 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 8.31328525s
	I0912 15:28:18.458925    6287 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0912 15:28:18.458985    6287 cache.go:87] Successfully saved all images to host disk.
	I0912 15:28:19.993497    6287 start.go:128] duration metric: took 2.408041291s to createHost
	I0912 15:28:19.993596    6287 start.go:83] releasing machines lock for "no-preload-558000", held for 2.408514708s
	W0912 15:28:19.993948    6287 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-558000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-558000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:28:20.007467    6287 out.go:201] 
	W0912 15:28:20.012572    6287 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:28:20.012611    6287 out.go:270] * 
	* 
	W0912 15:28:20.015111    6287 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:28:20.023430    6287 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-558000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000: exit status 7 (50.249ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-558000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.06s)
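Editor's aside: every failure in this group reduces to the same root cause visible in the log above — minikube wraps the QEMU start in socket_vmnet_client, and the connect to /var/run/socket_vmnet is refused, so the VM process never launches. A minimal diagnostic sketch for that condition (paths taken from the log; the launchd label is the one the upstream socket_vmnet docs use and is an assumption about this host):

	# Is the unix socket present, and is a socket_vmnet daemon running?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If socket_vmnet was installed as a launchd service (label assumed, not confirmed by this log):
	sudo launchctl print system/io.github.lima-vm.socket_vmnet | head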

TestStartStop/group/embed-certs/serial/FirstStart (11.15s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-818000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-818000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (11.080580625s)

-- stdout --
	* [embed-certs-818000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-818000" primary control-plane node in "embed-certs-818000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-818000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:28:18.795933    6332 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:28:18.796071    6332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:28:18.796074    6332 out.go:358] Setting ErrFile to fd 2...
	I0912 15:28:18.796076    6332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:28:18.796187    6332 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:28:18.797246    6332 out.go:352] Setting JSON to false
	I0912 15:28:18.813269    6332 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5262,"bootTime":1726174836,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:28:18.813357    6332 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:28:18.817751    6332 out.go:177] * [embed-certs-818000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:28:18.824781    6332 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:28:18.824795    6332 notify.go:220] Checking for updates...
	I0912 15:28:18.831746    6332 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:28:18.834742    6332 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:28:18.837690    6332 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:28:18.840690    6332 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:28:18.843770    6332 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:28:18.847035    6332 config.go:182] Loaded profile config "multinode-323000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:28:18.847109    6332 config.go:182] Loaded profile config "no-preload-558000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:28:18.847157    6332 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:28:18.851691    6332 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:28:18.858740    6332 start.go:297] selected driver: qemu2
	I0912 15:28:18.858747    6332 start.go:901] validating driver "qemu2" against <nil>
	I0912 15:28:18.858756    6332 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:28:18.861193    6332 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 15:28:18.863722    6332 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:28:18.866874    6332 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:28:18.866895    6332 cni.go:84] Creating CNI manager for ""
	I0912 15:28:18.866903    6332 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:28:18.866909    6332 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 15:28:18.866931    6332 start.go:340] cluster config:
	{Name:embed-certs-818000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-818000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:28:18.870897    6332 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:28:18.878701    6332 out.go:177] * Starting "embed-certs-818000" primary control-plane node in "embed-certs-818000" cluster
	I0912 15:28:18.882593    6332 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 15:28:18.882608    6332 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0912 15:28:18.882621    6332 cache.go:56] Caching tarball of preloaded images
	I0912 15:28:18.882689    6332 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:28:18.882695    6332 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0912 15:28:18.882764    6332 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/embed-certs-818000/config.json ...
	I0912 15:28:18.882779    6332 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/embed-certs-818000/config.json: {Name:mkc4db44e5c631daa2dbcc82b5f1ccb8ca4f6eb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:28:18.883207    6332 start.go:360] acquireMachinesLock for embed-certs-818000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:28:19.993735    6332 start.go:364] duration metric: took 1.11052925s to acquireMachinesLock for "embed-certs-818000"
	I0912 15:28:19.993934    6332 start.go:93] Provisioning new machine with config: &{Name:embed-certs-818000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-818000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:28:19.994152    6332 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:28:20.003452    6332 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 15:28:20.054938    6332 start.go:159] libmachine.API.Create for "embed-certs-818000" (driver="qemu2")
	I0912 15:28:20.054979    6332 client.go:168] LocalClient.Create starting
	I0912 15:28:20.055132    6332 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:28:20.055194    6332 main.go:141] libmachine: Decoding PEM data...
	I0912 15:28:20.055220    6332 main.go:141] libmachine: Parsing certificate...
	I0912 15:28:20.055278    6332 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:28:20.055325    6332 main.go:141] libmachine: Decoding PEM data...
	I0912 15:28:20.055347    6332 main.go:141] libmachine: Parsing certificate...
	I0912 15:28:20.055955    6332 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:28:20.251910    6332 main.go:141] libmachine: Creating SSH key...
	I0912 15:28:20.348721    6332 main.go:141] libmachine: Creating Disk image...
	I0912 15:28:20.348731    6332 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:28:20.348955    6332 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/embed-certs-818000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/embed-certs-818000/disk.qcow2
	I0912 15:28:20.358965    6332 main.go:141] libmachine: STDOUT: 
	I0912 15:28:20.358997    6332 main.go:141] libmachine: STDERR: 
	I0912 15:28:20.359056    6332 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/embed-certs-818000/disk.qcow2 +20000M
	I0912 15:28:20.376660    6332 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:28:20.376686    6332 main.go:141] libmachine: STDERR: 
	I0912 15:28:20.376698    6332 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/embed-certs-818000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/embed-certs-818000/disk.qcow2
	I0912 15:28:20.376703    6332 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:28:20.376713    6332 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:28:20.376740    6332 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/embed-certs-818000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/embed-certs-818000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/embed-certs-818000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:22:7b:a9:ed:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/embed-certs-818000/disk.qcow2
	I0912 15:28:20.378503    6332 main.go:141] libmachine: STDOUT: 
	I0912 15:28:20.378520    6332 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:28:20.378539    6332 client.go:171] duration metric: took 323.56325ms to LocalClient.Create
	I0912 15:28:22.380651    6332 start.go:128] duration metric: took 2.386526667s to createHost
	I0912 15:28:22.380767    6332 start.go:83] releasing machines lock for "embed-certs-818000", held for 2.387048708s
	W0912 15:28:22.380812    6332 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:28:22.390910    6332 out.go:177] * Deleting "embed-certs-818000" in qemu2 ...
	W0912 15:28:22.418768    6332 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:28:22.418791    6332 start.go:729] Will try again in 5 seconds ...
	I0912 15:28:27.421068    6332 start.go:360] acquireMachinesLock for embed-certs-818000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:28:27.421567    6332 start.go:364] duration metric: took 390.917µs to acquireMachinesLock for "embed-certs-818000"
	I0912 15:28:27.421711    6332 start.go:93] Provisioning new machine with config: &{Name:embed-certs-818000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-818000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:28:27.422013    6332 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:28:27.431733    6332 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 15:28:27.484930    6332 start.go:159] libmachine.API.Create for "embed-certs-818000" (driver="qemu2")
	I0912 15:28:27.484979    6332 client.go:168] LocalClient.Create starting
	I0912 15:28:27.485098    6332 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:28:27.485165    6332 main.go:141] libmachine: Decoding PEM data...
	I0912 15:28:27.485182    6332 main.go:141] libmachine: Parsing certificate...
	I0912 15:28:27.485249    6332 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:28:27.485298    6332 main.go:141] libmachine: Decoding PEM data...
	I0912 15:28:27.485313    6332 main.go:141] libmachine: Parsing certificate...
	I0912 15:28:27.485810    6332 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:28:27.664137    6332 main.go:141] libmachine: Creating SSH key...
	I0912 15:28:27.767895    6332 main.go:141] libmachine: Creating Disk image...
	I0912 15:28:27.767903    6332 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:28:27.768131    6332 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/embed-certs-818000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/embed-certs-818000/disk.qcow2
	I0912 15:28:27.777381    6332 main.go:141] libmachine: STDOUT: 
	I0912 15:28:27.777402    6332 main.go:141] libmachine: STDERR: 
	I0912 15:28:27.777463    6332 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/embed-certs-818000/disk.qcow2 +20000M
	I0912 15:28:27.785323    6332 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:28:27.785339    6332 main.go:141] libmachine: STDERR: 
	I0912 15:28:27.785351    6332 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/embed-certs-818000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/embed-certs-818000/disk.qcow2
	I0912 15:28:27.785356    6332 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:28:27.785375    6332 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:28:27.785402    6332 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/embed-certs-818000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/embed-certs-818000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/embed-certs-818000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:54:9c:a8:47:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/embed-certs-818000/disk.qcow2
	I0912 15:28:27.786988    6332 main.go:141] libmachine: STDOUT: 
	I0912 15:28:27.787003    6332 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:28:27.787015    6332 client.go:171] duration metric: took 302.03775ms to LocalClient.Create
	I0912 15:28:29.789181    6332 start.go:128] duration metric: took 2.367185333s to createHost
	I0912 15:28:29.789248    6332 start.go:83] releasing machines lock for "embed-certs-818000", held for 2.367709042s
	W0912 15:28:29.789597    6332 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-818000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-818000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:28:29.804218    6332 out.go:201] 
	W0912 15:28:29.808238    6332 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:28:29.808264    6332 out.go:270] * 
	* 
	W0912 15:28:29.811040    6332 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:28:29.821179    6332 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-818000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-818000 -n embed-certs-818000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-818000 -n embed-certs-818000: exit status 7 (68.097834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-818000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (11.15s)

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-558000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-558000 create -f testdata/busybox.yaml: exit status 1 (30.671125ms)

** stderr ** 
	error: context "no-preload-558000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-558000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000: exit status 7 (34.216958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-558000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000: exit status 7 (33.864125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-558000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)
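Editor's aside: this failure is purely downstream of FirstStart — because the VM never came up, no kubeconfig context named no-preload-558000 was ever written, so every kubectl --context call exits 1 before reaching any cluster. A quick way to tell a missing context apart from a stopped cluster (standard kubectl/minikube commands; nothing here is specific to this report):

	kubectl config get-contexts                  # the context for the profile is absent entirely
	out/minikube-darwin-arm64 profile list       # the profile still exists, but its host is Stopped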

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.15s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-558000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-558000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-558000 describe deploy/metrics-server -n kube-system: exit status 1 (27.17775ms)

** stderr ** 
	error: context "no-preload-558000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-558000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000: exit status 7 (29.632459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-558000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.15s)

TestStartStop/group/no-preload/serial/SecondStart (7.38s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-558000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-558000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (7.320441875s)

-- stdout --
	* [no-preload-558000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-558000" primary control-plane node in "no-preload-558000" cluster
	* Restarting existing qemu2 VM for "no-preload-558000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-558000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:28:22.572740    6370 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:28:22.572861    6370 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:28:22.572864    6370 out.go:358] Setting ErrFile to fd 2...
	I0912 15:28:22.572866    6370 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:28:22.572993    6370 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:28:22.574021    6370 out.go:352] Setting JSON to false
	I0912 15:28:22.590292    6370 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5266,"bootTime":1726174836,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:28:22.590357    6370 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:28:22.593943    6370 out.go:177] * [no-preload-558000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:28:22.600891    6370 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:28:22.600914    6370 notify.go:220] Checking for updates...
	I0912 15:28:22.607980    6370 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:28:22.610902    6370 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:28:22.613895    6370 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:28:22.616894    6370 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:28:22.619800    6370 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:28:22.623190    6370 config.go:182] Loaded profile config "no-preload-558000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:28:22.623442    6370 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:28:22.627850    6370 out.go:177] * Using the qemu2 driver based on existing profile
	I0912 15:28:22.634897    6370 start.go:297] selected driver: qemu2
	I0912 15:28:22.634902    6370 start.go:901] validating driver "qemu2" against &{Name:no-preload-558000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-558000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:28:22.634952    6370 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:28:22.637267    6370 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:28:22.637315    6370 cni.go:84] Creating CNI manager for ""
	I0912 15:28:22.637322    6370 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:28:22.637348    6370 start.go:340] cluster config:
	{Name:no-preload-558000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-558000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:28:22.640884    6370 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:28:22.652884    6370 out.go:177] * Starting "no-preload-558000" primary control-plane node in "no-preload-558000" cluster
	I0912 15:28:22.656883    6370 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 15:28:22.656962    6370 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/no-preload-558000/config.json ...
	I0912 15:28:22.656959    6370 cache.go:107] acquiring lock: {Name:mkfe2304d537ba483d3e534aa7e738da6eb0c8f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:28:22.656977    6370 cache.go:107] acquiring lock: {Name:mk130315e7babf2009c66d827e5ea8f2e7e3b929 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:28:22.656980    6370 cache.go:107] acquiring lock: {Name:mkbe2d8ad392940baeeb47f8bfd43e23fd508e27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:28:22.657041    6370 cache.go:115] /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0912 15:28:22.657041    6370 cache.go:115] /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0912 15:28:22.657047    6370 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 69.959µs
	I0912 15:28:22.657050    6370 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 94.791µs
	I0912 15:28:22.657055    6370 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0912 15:28:22.657049    6370 cache.go:115] /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0912 15:28:22.657060    6370 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 90.25µs
	I0912 15:28:22.657063    6370 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0912 15:28:22.657059    6370 cache.go:107] acquiring lock: {Name:mkb1ef8617b3eb43cb31b374de479cb4ab6ed8f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:28:22.657055    6370 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0912 15:28:22.656960    6370 cache.go:107] acquiring lock: {Name:mkb2a64d3e3719cf8754386c1b8c2a886238e6a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:28:22.657118    6370 cache.go:115] /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0912 15:28:22.657123    6370 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 64.291µs
	I0912 15:28:22.657062    6370 cache.go:107] acquiring lock: {Name:mk4e8510d5d5efabb4aa841ac979aa83dc632b25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:28:22.657069    6370 cache.go:107] acquiring lock: {Name:mkd529cdbee2f60eda17c12cbf4e462479ecfdf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:28:22.657133    6370 cache.go:107] acquiring lock: {Name:mk8960d3fb9a7d2712804e60fad3e390a585dc81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:28:22.657128    6370 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0912 15:28:22.657183    6370 cache.go:115] /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0912 15:28:22.657191    6370 cache.go:115] /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0912 15:28:22.657191    6370 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 232.833µs
	I0912 15:28:22.657200    6370 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0912 15:28:22.657195    6370 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 126.458µs
	I0912 15:28:22.657202    6370 cache.go:115] /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0912 15:28:22.657185    6370 cache.go:115] /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0912 15:28:22.657208    6370 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 98.542µs
	I0912 15:28:22.657204    6370 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0912 15:28:22.657210    6370 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 148.833µs
	I0912 15:28:22.657246    6370 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0912 15:28:22.657212    6370 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0912 15:28:22.657250    6370 cache.go:87] Successfully saved all images to host disk.
	I0912 15:28:22.657436    6370 start.go:360] acquireMachinesLock for no-preload-558000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:28:22.657471    6370 start.go:364] duration metric: took 28.25µs to acquireMachinesLock for "no-preload-558000"
	I0912 15:28:22.657483    6370 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:28:22.657488    6370 fix.go:54] fixHost starting: 
	I0912 15:28:22.657612    6370 fix.go:112] recreateIfNeeded on no-preload-558000: state=Stopped err=<nil>
	W0912 15:28:22.657624    6370 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 15:28:22.665926    6370 out.go:177] * Restarting existing qemu2 VM for "no-preload-558000" ...
	I0912 15:28:22.669734    6370 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:28:22.669774    6370 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/no-preload-558000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/no-preload-558000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/no-preload-558000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:35:27:4e:fc:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/no-preload-558000/disk.qcow2
	I0912 15:28:22.671996    6370 main.go:141] libmachine: STDOUT: 
	I0912 15:28:22.672025    6370 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:28:22.672061    6370 fix.go:56] duration metric: took 14.57375ms for fixHost
	I0912 15:28:22.672066    6370 start.go:83] releasing machines lock for "no-preload-558000", held for 14.590583ms
	W0912 15:28:22.672074    6370 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:28:22.672118    6370 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:28:22.672123    6370 start.go:729] Will try again in 5 seconds ...
	I0912 15:28:27.674073    6370 start.go:360] acquireMachinesLock for no-preload-558000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:28:29.789447    6370 start.go:364] duration metric: took 2.115321583s to acquireMachinesLock for "no-preload-558000"
	I0912 15:28:29.789600    6370 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:28:29.789617    6370 fix.go:54] fixHost starting: 
	I0912 15:28:29.790315    6370 fix.go:112] recreateIfNeeded on no-preload-558000: state=Stopped err=<nil>
	W0912 15:28:29.790345    6370 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 15:28:29.804172    6370 out.go:177] * Restarting existing qemu2 VM for "no-preload-558000" ...
	I0912 15:28:29.808187    6370 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:28:29.808604    6370 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/no-preload-558000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/no-preload-558000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/no-preload-558000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:35:27:4e:fc:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/no-preload-558000/disk.qcow2
	I0912 15:28:29.818223    6370 main.go:141] libmachine: STDOUT: 
	I0912 15:28:29.818312    6370 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:28:29.818400    6370 fix.go:56] duration metric: took 28.781959ms for fixHost
	I0912 15:28:29.818426    6370 start.go:83] releasing machines lock for "no-preload-558000", held for 28.93275ms
	W0912 15:28:29.818654    6370 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-558000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-558000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:28:29.833169    6370 out.go:201] 
	W0912 15:28:29.837113    6370 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:28:29.837143    6370 out.go:270] * 
	* 
	W0912 15:28:29.839872    6370 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:28:29.856169    6370 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-558000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000: exit status 7 (55.918792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-558000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (7.38s)
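
Note: this SecondStart failure, like the FirstStart failures above it, dies at the same point every time: QEMU is launched through socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, meaning the socket_vmnet daemon was not listening on the CI host. A quick way to confirm the daemon's state on the host (illustrative commands, assuming the Homebrew-style install paths shown in these logs):

	# does the unix socket exist at the path the driver uses?
	ls -l /var/run/socket_vmnet
	# socket_vmnet is usually run as a root launchd service; check that it is loaded
	sudo launchctl list | grep -i socket_vmnet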

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-818000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-818000 create -f testdata/busybox.yaml: exit status 1 (31.592083ms)

** stderr ** 
	error: context "embed-certs-818000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-818000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-818000 -n embed-certs-818000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-818000 -n embed-certs-818000: exit status 7 (30.099417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-818000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-818000 -n embed-certs-818000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-818000 -n embed-certs-818000: exit status 7 (32.98925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-818000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
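
Note: the error `context "embed-certs-818000" does not exist` is a downstream symptom rather than a separate bug: minikube only writes a context into the kubeconfig once a cluster actually comes up, and the embed-certs start failed before that point. The contexts that do exist can be listed with standard kubectl commands (illustrative):

	kubectl config get-contexts
	kubectl config view -o jsonpath='{.contexts[*].name}'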

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-558000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000: exit status 7 (33.474916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-558000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-558000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-558000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-558000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.092667ms)

** stderr ** 
	error: context "no-preload-558000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-558000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000: exit status 7 (30.619958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-558000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-818000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-818000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-818000 describe deploy/metrics-server -n kube-system: exit status 1 (29.44ms)

** stderr ** 
	error: context "embed-certs-818000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-818000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-818000 -n embed-certs-818000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-818000 -n embed-certs-818000: exit status 7 (35.765625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-818000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)
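
Note: the "fake.domain/registry.k8s.io/echoserver:1.4" expectation comes straight from the enable command at the top of this test, which overrides the metrics-server image and registry so the assertion has a deterministic value to look for:

	out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-818000 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain

The describe step then fails only because the cluster context was never created, as in the other tests in this group.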

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-558000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000: exit status 7 (31.108542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-558000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
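
Note: the entire expected image set is reported missing because "image list" ran against a VM that never booted, so it returned an empty list rather than a partially wrong one. Most of the expected set for a given Kubernetes version can be reproduced independently of minikube (illustrative; requires kubeadm, and the storage-provisioner image is minikube's own addition on top of it):

	kubeadm config images list --kubernetes-version v1.31.1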

TestStartStop/group/no-preload/serial/Pause (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-558000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-558000 --alsologtostderr -v=1: exit status 83 (51.25725ms)

-- stdout --
	* The control-plane node no-preload-558000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-558000"

-- /stdout --
** stderr ** 
	I0912 15:28:30.127768    6406 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:28:30.127905    6406 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:28:30.127909    6406 out.go:358] Setting ErrFile to fd 2...
	I0912 15:28:30.127911    6406 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:28:30.128035    6406 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:28:30.128287    6406 out.go:352] Setting JSON to false
	I0912 15:28:30.128295    6406 mustload.go:65] Loading cluster: no-preload-558000
	I0912 15:28:30.128470    6406 config.go:182] Loaded profile config "no-preload-558000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:28:30.134730    6406 out.go:177] * The control-plane node no-preload-558000 host is not running: state=Stopped
	I0912 15:28:30.142643    6406 out.go:177]   To start a cluster, run: "minikube start -p no-preload-558000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-558000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000: exit status 7 (30.602792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-558000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000: exit status 7 (27.529583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-558000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)
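
Note: exit status 83 appears to be minikube's "guest not running" code (the 80s are the guest-state class; compare exit status 80, GUEST_PROVISION, in the start tests above): pause loaded the profile, saw state=Stopped, printed advice, and bailed out without attempting anything. The same state check can be run directly (illustrative):

	out/minikube-darwin-arm64 status -p no-preload-558000 -o json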

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-572000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-572000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.842545542s)

-- stdout --
	* [default-k8s-diff-port-572000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-572000" primary control-plane node in "default-k8s-diff-port-572000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-572000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:28:30.554193    6438 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:28:30.554455    6438 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:28:30.554462    6438 out.go:358] Setting ErrFile to fd 2...
	I0912 15:28:30.554464    6438 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:28:30.554617    6438 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:28:30.555918    6438 out.go:352] Setting JSON to false
	I0912 15:28:30.572109    6438 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5274,"bootTime":1726174836,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:28:30.572185    6438 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:28:30.575633    6438 out.go:177] * [default-k8s-diff-port-572000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:28:30.583637    6438 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:28:30.583678    6438 notify.go:220] Checking for updates...
	I0912 15:28:30.590586    6438 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:28:30.593674    6438 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:28:30.596648    6438 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:28:30.599621    6438 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:28:30.602650    6438 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:28:30.605925    6438 config.go:182] Loaded profile config "embed-certs-818000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:28:30.605989    6438 config.go:182] Loaded profile config "multinode-323000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:28:30.606041    6438 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:28:30.609588    6438 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:28:30.615596    6438 start.go:297] selected driver: qemu2
	I0912 15:28:30.615602    6438 start.go:901] validating driver "qemu2" against <nil>
	I0912 15:28:30.615608    6438 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:28:30.617954    6438 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 15:28:30.620653    6438 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:28:30.623711    6438 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:28:30.623743    6438 cni.go:84] Creating CNI manager for ""
	I0912 15:28:30.623751    6438 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:28:30.623755    6438 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 15:28:30.623789    6438 start.go:340] cluster config:
	{Name:default-k8s-diff-port-572000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-572000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:28:30.627630    6438 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:28:30.635624    6438 out.go:177] * Starting "default-k8s-diff-port-572000" primary control-plane node in "default-k8s-diff-port-572000" cluster
	I0912 15:28:30.639699    6438 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 15:28:30.639717    6438 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0912 15:28:30.639728    6438 cache.go:56] Caching tarball of preloaded images
	I0912 15:28:30.639801    6438 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:28:30.639808    6438 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0912 15:28:30.639876    6438 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/default-k8s-diff-port-572000/config.json ...
	I0912 15:28:30.639889    6438 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/default-k8s-diff-port-572000/config.json: {Name:mk2997e66309127ddc08de393856b7879421918c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:28:30.640327    6438 start.go:360] acquireMachinesLock for default-k8s-diff-port-572000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:28:30.640369    6438 start.go:364] duration metric: took 33.375µs to acquireMachinesLock for "default-k8s-diff-port-572000"
	I0912 15:28:30.640383    6438 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-572000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:28:30.640416    6438 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:28:30.647655    6438 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 15:28:30.666028    6438 start.go:159] libmachine.API.Create for "default-k8s-diff-port-572000" (driver="qemu2")
	I0912 15:28:30.666057    6438 client.go:168] LocalClient.Create starting
	I0912 15:28:30.666132    6438 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:28:30.666166    6438 main.go:141] libmachine: Decoding PEM data...
	I0912 15:28:30.666176    6438 main.go:141] libmachine: Parsing certificate...
	I0912 15:28:30.666211    6438 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:28:30.666242    6438 main.go:141] libmachine: Decoding PEM data...
	I0912 15:28:30.666248    6438 main.go:141] libmachine: Parsing certificate...
	I0912 15:28:30.666717    6438 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:28:30.825464    6438 main.go:141] libmachine: Creating SSH key...
	I0912 15:28:30.933136    6438 main.go:141] libmachine: Creating Disk image...
	I0912 15:28:30.933142    6438 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:28:30.933331    6438 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/default-k8s-diff-port-572000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/default-k8s-diff-port-572000/disk.qcow2
	I0912 15:28:30.942260    6438 main.go:141] libmachine: STDOUT: 
	I0912 15:28:30.942281    6438 main.go:141] libmachine: STDERR: 
	I0912 15:28:30.942342    6438 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/default-k8s-diff-port-572000/disk.qcow2 +20000M
	I0912 15:28:30.950179    6438 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:28:30.950192    6438 main.go:141] libmachine: STDERR: 
	I0912 15:28:30.950208    6438 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/default-k8s-diff-port-572000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/default-k8s-diff-port-572000/disk.qcow2
	I0912 15:28:30.950213    6438 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:28:30.950226    6438 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:28:30.950267    6438 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/default-k8s-diff-port-572000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/default-k8s-diff-port-572000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/default-k8s-diff-port-572000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:03:5c:06:7f:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/default-k8s-diff-port-572000/disk.qcow2
	I0912 15:28:30.951870    6438 main.go:141] libmachine: STDOUT: 
	I0912 15:28:30.951885    6438 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:28:30.951903    6438 client.go:171] duration metric: took 285.847333ms to LocalClient.Create
	I0912 15:28:32.954074    6438 start.go:128] duration metric: took 2.313689583s to createHost
	I0912 15:28:32.954140    6438 start.go:83] releasing machines lock for "default-k8s-diff-port-572000", held for 2.313812375s
	W0912 15:28:32.954273    6438 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:28:32.971353    6438 out.go:177] * Deleting "default-k8s-diff-port-572000" in qemu2 ...
	W0912 15:28:33.002797    6438 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:28:33.002831    6438 start.go:729] Will try again in 5 seconds ...
	I0912 15:28:38.005004    6438 start.go:360] acquireMachinesLock for default-k8s-diff-port-572000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:28:38.005434    6438 start.go:364] duration metric: took 255.25µs to acquireMachinesLock for "default-k8s-diff-port-572000"
	I0912 15:28:38.005561    6438 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-572000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:28:38.005910    6438 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:28:38.015428    6438 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 15:28:38.066475    6438 start.go:159] libmachine.API.Create for "default-k8s-diff-port-572000" (driver="qemu2")
	I0912 15:28:38.066528    6438 client.go:168] LocalClient.Create starting
	I0912 15:28:38.066635    6438 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:28:38.066697    6438 main.go:141] libmachine: Decoding PEM data...
	I0912 15:28:38.066717    6438 main.go:141] libmachine: Parsing certificate...
	I0912 15:28:38.066772    6438 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:28:38.066815    6438 main.go:141] libmachine: Decoding PEM data...
	I0912 15:28:38.066826    6438 main.go:141] libmachine: Parsing certificate...
	I0912 15:28:38.067389    6438 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:28:38.235820    6438 main.go:141] libmachine: Creating SSH key...
	I0912 15:28:38.292627    6438 main.go:141] libmachine: Creating Disk image...
	I0912 15:28:38.292632    6438 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:28:38.292853    6438 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/default-k8s-diff-port-572000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/default-k8s-diff-port-572000/disk.qcow2
	I0912 15:28:38.302256    6438 main.go:141] libmachine: STDOUT: 
	I0912 15:28:38.302314    6438 main.go:141] libmachine: STDERR: 
	I0912 15:28:38.302368    6438 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/default-k8s-diff-port-572000/disk.qcow2 +20000M
	I0912 15:28:38.310257    6438 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:28:38.310273    6438 main.go:141] libmachine: STDERR: 
	I0912 15:28:38.310285    6438 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/default-k8s-diff-port-572000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/default-k8s-diff-port-572000/disk.qcow2
	I0912 15:28:38.310289    6438 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:28:38.310296    6438 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:28:38.310333    6438 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/default-k8s-diff-port-572000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/default-k8s-diff-port-572000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/default-k8s-diff-port-572000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:66:20:00:ab:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/default-k8s-diff-port-572000/disk.qcow2
	I0912 15:28:38.311959    6438 main.go:141] libmachine: STDOUT: 
	I0912 15:28:38.311973    6438 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:28:38.311984    6438 client.go:171] duration metric: took 245.45725ms to LocalClient.Create
	I0912 15:28:40.314124    6438 start.go:128] duration metric: took 2.308220959s to createHost
	I0912 15:28:40.314178    6438 start.go:83] releasing machines lock for "default-k8s-diff-port-572000", held for 2.30876875s
	W0912 15:28:40.314527    6438 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-572000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-572000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:28:40.332089    6438 out.go:201] 
	W0912 15:28:40.339148    6438 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:28:40.339173    6438 out.go:270] * 
	* 
	W0912 15:28:40.342064    6438 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:28:40.353078    6438 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-572000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-572000 -n default-k8s-diff-port-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-572000 -n default-k8s-diff-port-572000: exit status 7 (60.811458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-572000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.91s)
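
Note: the qemu2 driver has a hard dependency on socket_vmnet for networking: socket_vmnet_client connects to the daemon's unix socket and hands the connection to QEMU as fd 3 (the "-netdev socket,id=net0,fd=3" argument in the command line above), so no VM can start while the daemon is down. Restarting the daemon by hand looks roughly like this (illustrative; the flag and paths follow the socket_vmnet README and the defaults in these logs, and vmnet itself requires root):

	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet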

TestStartStop/group/embed-certs/serial/SecondStart (6.27s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-818000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-818000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (6.223217208s)

-- stdout --
	* [embed-certs-818000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-818000" primary control-plane node in "embed-certs-818000" cluster
	* Restarting existing qemu2 VM for "embed-certs-818000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-818000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:28:34.190655    6466 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:28:34.190781    6466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:28:34.190784    6466 out.go:358] Setting ErrFile to fd 2...
	I0912 15:28:34.190786    6466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:28:34.190917    6466 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:28:34.191889    6466 out.go:352] Setting JSON to false
	I0912 15:28:34.207757    6466 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5278,"bootTime":1726174836,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:28:34.207837    6466 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:28:34.212463    6466 out.go:177] * [embed-certs-818000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:28:34.219400    6466 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:28:34.219430    6466 notify.go:220] Checking for updates...
	I0912 15:28:34.224840    6466 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:28:34.228332    6466 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:28:34.231387    6466 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:28:34.234367    6466 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:28:34.237591    6466 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:28:34.240585    6466 config.go:182] Loaded profile config "embed-certs-818000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:28:34.240856    6466 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:28:34.245393    6466 out.go:177] * Using the qemu2 driver based on existing profile
	I0912 15:28:34.252305    6466 start.go:297] selected driver: qemu2
	I0912 15:28:34.252309    6466 start.go:901] validating driver "qemu2" against &{Name:embed-certs-818000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-818000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:28:34.252363    6466 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:28:34.254823    6466 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:28:34.254851    6466 cni.go:84] Creating CNI manager for ""
	I0912 15:28:34.254864    6466 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:28:34.254888    6466 start.go:340] cluster config:
	{Name:embed-certs-818000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-818000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:28:34.258416    6466 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:28:34.266389    6466 out.go:177] * Starting "embed-certs-818000" primary control-plane node in "embed-certs-818000" cluster
	I0912 15:28:34.270374    6466 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 15:28:34.270391    6466 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0912 15:28:34.270401    6466 cache.go:56] Caching tarball of preloaded images
	I0912 15:28:34.270463    6466 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:28:34.270470    6466 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0912 15:28:34.270531    6466 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/embed-certs-818000/config.json ...
	I0912 15:28:34.271071    6466 start.go:360] acquireMachinesLock for embed-certs-818000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:28:34.271104    6466 start.go:364] duration metric: took 27.583µs to acquireMachinesLock for "embed-certs-818000"
	I0912 15:28:34.271116    6466 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:28:34.271122    6466 fix.go:54] fixHost starting: 
	I0912 15:28:34.271246    6466 fix.go:112] recreateIfNeeded on embed-certs-818000: state=Stopped err=<nil>
	W0912 15:28:34.271255    6466 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 15:28:34.275350    6466 out.go:177] * Restarting existing qemu2 VM for "embed-certs-818000" ...
	I0912 15:28:34.282294    6466 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:28:34.282352    6466 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/embed-certs-818000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/embed-certs-818000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/embed-certs-818000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:54:9c:a8:47:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/embed-certs-818000/disk.qcow2
	I0912 15:28:34.284612    6466 main.go:141] libmachine: STDOUT: 
	I0912 15:28:34.284632    6466 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:28:34.284663    6466 fix.go:56] duration metric: took 13.541375ms for fixHost
	I0912 15:28:34.284667    6466 start.go:83] releasing machines lock for "embed-certs-818000", held for 13.557625ms
	W0912 15:28:34.284673    6466 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:28:34.284700    6466 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:28:34.284705    6466 start.go:729] Will try again in 5 seconds ...
	I0912 15:28:39.286764    6466 start.go:360] acquireMachinesLock for embed-certs-818000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:28:40.314349    6466 start.go:364] duration metric: took 1.027502333s to acquireMachinesLock for "embed-certs-818000"
	I0912 15:28:40.314537    6466 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:28:40.314557    6466 fix.go:54] fixHost starting: 
	I0912 15:28:40.315267    6466 fix.go:112] recreateIfNeeded on embed-certs-818000: state=Stopped err=<nil>
	W0912 15:28:40.315294    6466 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 15:28:40.336085    6466 out.go:177] * Restarting existing qemu2 VM for "embed-certs-818000" ...
	I0912 15:28:40.342043    6466 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:28:40.342357    6466 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/embed-certs-818000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/embed-certs-818000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/embed-certs-818000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:54:9c:a8:47:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/embed-certs-818000/disk.qcow2
	I0912 15:28:40.351712    6466 main.go:141] libmachine: STDOUT: 
	I0912 15:28:40.351776    6466 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:28:40.351839    6466 fix.go:56] duration metric: took 37.285958ms for fixHost
	I0912 15:28:40.351855    6466 start.go:83] releasing machines lock for "embed-certs-818000", held for 37.467583ms
	W0912 15:28:40.352061    6466 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-818000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-818000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:28:40.362555    6466 out.go:201] 
	W0912 15:28:40.367176    6466 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:28:40.367221    6466 out.go:270] * 
	* 
	W0912 15:28:40.369260    6466 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:28:40.377138    6466 out.go:201] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-818000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-818000 -n embed-certs-818000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-818000 -n embed-certs-818000: exit status 7 (49.830708ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-818000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.27s)
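Every failure in this group shares the same proximate cause: minikube launches QEMU through socket_vmnet_client, and the socket_vmnet daemon on this host is refusing connections on /var/run/socket_vmnet, so the client exits before the VM ever boots. A minimal triage sketch for the CI host follows; the launchd label is the socket_vmnet upstream default and is an assumption about how the daemon was installed on this machine:

    # Is the socket_vmnet daemon registered with launchd and running?
    sudo launchctl list | grep -i socket_vmnet
    # Does the unix socket exist?
    ls -l /var/run/socket_vmnet
    # If the daemon is down, restart it (label assumed from the upstream plist):
    sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet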

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-572000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-572000 create -f testdata/busybox.yaml: exit status 1 (31.433791ms)
** stderr ** 
	error: context "default-k8s-diff-port-572000" does not exist
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-572000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-572000 -n default-k8s-diff-port-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-572000 -n default-k8s-diff-port-572000: exit status 7 (29.471542ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-572000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-572000 -n default-k8s-diff-port-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-572000 -n default-k8s-diff-port-572000: exit status 7 (34.599125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-572000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
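The "context ... does not exist" failures in this test, and in the embed-certs addon and dashboard checks below, are secondary: because the cluster never started, minikube never wrote a context for the profile into the kubeconfig, so every kubectl call against that context fails immediately. A quick way to confirm, using the kubeconfig path recorded in the start logs above:

    kubectl --kubeconfig /Users/jenkins/minikube-integration/19616-1259/kubeconfig config get-contexts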

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-818000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-818000 -n embed-certs-818000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-818000 -n embed-certs-818000: exit status 7 (33.419084ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-818000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-818000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-818000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-818000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.673ms)
** stderr ** 
	error: context "embed-certs-818000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-818000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-818000 -n embed-certs-818000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-818000 -n embed-certs-818000: exit status 7 (31.376833ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-818000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-572000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-572000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-572000 describe deploy/metrics-server -n kube-system: exit status 1 (29.3095ms)
** stderr ** 
	error: context "default-k8s-diff-port-572000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-572000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-572000 -n default-k8s-diff-port-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-572000 -n default-k8s-diff-port-572000: exit status 7 (38.77425ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-572000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)
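Note that the addons enable step itself appears to have exited 0 here (no non-zero exit is recorded for it), which suggests it only updated the stored profile configuration; the test then fails at the kubectl verification step, which does need a live cluster. Re-running the enable command from the log illustrates this:

    out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-572000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain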

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-818000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-818000 -n embed-certs-818000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-818000 -n embed-certs-818000: exit status 7 (30.499459ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-818000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)
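The image diff above is a want/got comparison: the "-" entries are the images the test expects for v1.31.1, and the got side is empty because image list had no running VM to query. The listing can be reproduced by hand and returns nothing while the host is stopped:

    out/minikube-darwin-arm64 -p embed-certs-818000 image list --format=json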

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-818000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-818000 --alsologtostderr -v=1: exit status 83 (47.0635ms)
-- stdout --
	* The control-plane node embed-certs-818000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-818000"
-- /stdout --
** stderr ** 
	I0912 15:28:40.642928    6499 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:28:40.643120    6499 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:28:40.643128    6499 out.go:358] Setting ErrFile to fd 2...
	I0912 15:28:40.643130    6499 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:28:40.643250    6499 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:28:40.643460    6499 out.go:352] Setting JSON to false
	I0912 15:28:40.643471    6499 mustload.go:65] Loading cluster: embed-certs-818000
	I0912 15:28:40.643682    6499 config.go:182] Loaded profile config "embed-certs-818000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:28:40.647151    6499 out.go:177] * The control-plane node embed-certs-818000 host is not running: state=Stopped
	I0912 15:28:40.655000    6499 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-818000"
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-818000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-818000 -n embed-certs-818000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-818000 -n embed-certs-818000: exit status 7 (37.839667ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-818000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-818000 -n embed-certs-818000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-818000 -n embed-certs-818000: exit status 7 (28.055958ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-818000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/FirstStart (9.92s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-807000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-807000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.843900375s)
-- stdout --
	* [newest-cni-807000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-807000" primary control-plane node in "newest-cni-807000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-807000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0912 15:28:40.962876    6523 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:28:40.963007    6523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:28:40.963014    6523 out.go:358] Setting ErrFile to fd 2...
	I0912 15:28:40.963017    6523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:28:40.963163    6523 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:28:40.964268    6523 out.go:352] Setting JSON to false
	I0912 15:28:40.981511    6523 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5284,"bootTime":1726174836,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:28:40.981588    6523 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:28:40.986924    6523 out.go:177] * [newest-cni-807000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:28:40.994040    6523 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:28:40.994077    6523 notify.go:220] Checking for updates...
	I0912 15:28:41.000006    6523 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:28:41.002982    6523 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:28:41.004410    6523 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:28:41.008012    6523 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:28:41.011044    6523 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:28:41.014399    6523 config.go:182] Loaded profile config "default-k8s-diff-port-572000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:28:41.014470    6523 config.go:182] Loaded profile config "multinode-323000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:28:41.014530    6523 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:28:41.018963    6523 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:28:41.026022    6523 start.go:297] selected driver: qemu2
	I0912 15:28:41.026029    6523 start.go:901] validating driver "qemu2" against <nil>
	I0912 15:28:41.026040    6523 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:28:41.028336    6523 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0912 15:28:41.028358    6523 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0912 15:28:41.047528    6523 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:28:41.051211    6523 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0912 15:28:41.051246    6523 cni.go:84] Creating CNI manager for ""
	I0912 15:28:41.051254    6523 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:28:41.051258    6523 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 15:28:41.051298    6523 start.go:340] cluster config:
	{Name:newest-cni-807000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-807000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:28:41.055410    6523 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:28:41.063021    6523 out.go:177] * Starting "newest-cni-807000" primary control-plane node in "newest-cni-807000" cluster
	I0912 15:28:41.067007    6523 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 15:28:41.067023    6523 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0912 15:28:41.067033    6523 cache.go:56] Caching tarball of preloaded images
	I0912 15:28:41.067113    6523 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:28:41.067119    6523 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0912 15:28:41.067195    6523 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/newest-cni-807000/config.json ...
	I0912 15:28:41.067207    6523 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/newest-cni-807000/config.json: {Name:mk49dcd10310d9d648622256a34ea031dbcb06b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:28:41.067629    6523 start.go:360] acquireMachinesLock for newest-cni-807000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:28:41.067665    6523 start.go:364] duration metric: took 29.5µs to acquireMachinesLock for "newest-cni-807000"
	I0912 15:28:41.067678    6523 start.go:93] Provisioning new machine with config: &{Name:newest-cni-807000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:newest-cni-807000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:28:41.067710    6523 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:28:41.076987    6523 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 15:28:41.096223    6523 start.go:159] libmachine.API.Create for "newest-cni-807000" (driver="qemu2")
	I0912 15:28:41.096249    6523 client.go:168] LocalClient.Create starting
	I0912 15:28:41.096328    6523 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:28:41.096359    6523 main.go:141] libmachine: Decoding PEM data...
	I0912 15:28:41.096369    6523 main.go:141] libmachine: Parsing certificate...
	I0912 15:28:41.096407    6523 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:28:41.096435    6523 main.go:141] libmachine: Decoding PEM data...
	I0912 15:28:41.096443    6523 main.go:141] libmachine: Parsing certificate...
	I0912 15:28:41.096863    6523 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:28:41.257678    6523 main.go:141] libmachine: Creating SSH key...
	I0912 15:28:41.308367    6523 main.go:141] libmachine: Creating Disk image...
	I0912 15:28:41.308373    6523 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:28:41.308631    6523 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/newest-cni-807000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/newest-cni-807000/disk.qcow2
	I0912 15:28:41.317455    6523 main.go:141] libmachine: STDOUT: 
	I0912 15:28:41.317473    6523 main.go:141] libmachine: STDERR: 
	I0912 15:28:41.317522    6523 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/newest-cni-807000/disk.qcow2 +20000M
	I0912 15:28:41.325292    6523 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:28:41.325308    6523 main.go:141] libmachine: STDERR: 
	I0912 15:28:41.325324    6523 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/newest-cni-807000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/newest-cni-807000/disk.qcow2
	I0912 15:28:41.325330    6523 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:28:41.325342    6523 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:28:41.325373    6523 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/newest-cni-807000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/newest-cni-807000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/newest-cni-807000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:50:ef:26:57:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/newest-cni-807000/disk.qcow2
	I0912 15:28:41.327013    6523 main.go:141] libmachine: STDOUT: 
	I0912 15:28:41.327029    6523 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:28:41.327049    6523 client.go:171] duration metric: took 230.796417ms to LocalClient.Create
	I0912 15:28:43.329201    6523 start.go:128] duration metric: took 2.261518833s to createHost
	I0912 15:28:43.329260    6523 start.go:83] releasing machines lock for "newest-cni-807000", held for 2.261637292s
	W0912 15:28:43.329312    6523 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:28:43.344855    6523 out.go:177] * Deleting "newest-cni-807000" in qemu2 ...
	W0912 15:28:43.379505    6523 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:28:43.379541    6523 start.go:729] Will try again in 5 seconds ...
	I0912 15:28:48.381648    6523 start.go:360] acquireMachinesLock for newest-cni-807000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:28:48.386255    6523 start.go:364] duration metric: took 4.521958ms to acquireMachinesLock for "newest-cni-807000"
	I0912 15:28:48.386319    6523 start.go:93] Provisioning new machine with config: &{Name:newest-cni-807000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:newest-cni-807000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:28:48.386637    6523 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:28:48.395051    6523 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 15:28:48.443496    6523 start.go:159] libmachine.API.Create for "newest-cni-807000" (driver="qemu2")
	I0912 15:28:48.443551    6523 client.go:168] LocalClient.Create starting
	I0912 15:28:48.443683    6523 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/ca.pem
	I0912 15:28:48.443753    6523 main.go:141] libmachine: Decoding PEM data...
	I0912 15:28:48.443769    6523 main.go:141] libmachine: Parsing certificate...
	I0912 15:28:48.443833    6523 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19616-1259/.minikube/certs/cert.pem
	I0912 15:28:48.443878    6523 main.go:141] libmachine: Decoding PEM data...
	I0912 15:28:48.443894    6523 main.go:141] libmachine: Parsing certificate...
	I0912 15:28:48.444421    6523 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0912 15:28:48.615978    6523 main.go:141] libmachine: Creating SSH key...
	I0912 15:28:48.721803    6523 main.go:141] libmachine: Creating Disk image...
	I0912 15:28:48.721812    6523 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:28:48.722037    6523 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/newest-cni-807000/disk.qcow2.raw /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/newest-cni-807000/disk.qcow2
	I0912 15:28:48.732050    6523 main.go:141] libmachine: STDOUT: 
	I0912 15:28:48.732077    6523 main.go:141] libmachine: STDERR: 
	I0912 15:28:48.732129    6523 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/newest-cni-807000/disk.qcow2 +20000M
	I0912 15:28:48.741636    6523 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:28:48.741654    6523 main.go:141] libmachine: STDERR: 
	I0912 15:28:48.741673    6523 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/newest-cni-807000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/newest-cni-807000/disk.qcow2
	I0912 15:28:48.741676    6523 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:28:48.741686    6523 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:28:48.741718    6523 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/newest-cni-807000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/newest-cni-807000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/newest-cni-807000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:ad:d9:86:80:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/newest-cni-807000/disk.qcow2
	I0912 15:28:48.744285    6523 main.go:141] libmachine: STDOUT: 
	I0912 15:28:48.744304    6523 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:28:48.744316    6523 client.go:171] duration metric: took 300.767375ms to LocalClient.Create
	I0912 15:28:50.746475    6523 start.go:128] duration metric: took 2.359845167s to createHost
	I0912 15:28:50.746524    6523 start.go:83] releasing machines lock for "newest-cni-807000", held for 2.360287333s
	W0912 15:28:50.746768    6523 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-807000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-807000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:28:50.755243    6523 out.go:201] 
	W0912 15:28:50.758403    6523 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:28:50.758435    6523 out.go:270] * 
	* 
	W0912 15:28:50.760999    6523 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:28:50.773284    6523 out.go:201] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-807000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-807000 -n newest-cni-807000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-807000 -n newest-cni-807000: exit status 7 (68.852917ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-807000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.92s)
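Unlike the restart-path failures above, this test takes the create path: libmachine builds the disk image locally (the qemu-img convert and resize steps succeed), and the run only dies at the point where QEMU must be started under socket_vmnet_client so the guest NIC can receive the vmnet socket as file descriptor 3. Trimmed from the full command recorded above, the wrapper pattern is:

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
      qemu-system-aarch64 ... -netdev socket,id=net0,fd=3 ...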

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.8s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-572000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-572000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.755047334s)
-- stdout --
	* [default-k8s-diff-port-572000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-572000" primary control-plane node in "default-k8s-diff-port-572000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-572000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-572000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0912 15:28:42.699109    6547 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:28:42.699248    6547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:28:42.699252    6547 out.go:358] Setting ErrFile to fd 2...
	I0912 15:28:42.699254    6547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:28:42.699370    6547 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:28:42.700349    6547 out.go:352] Setting JSON to false
	I0912 15:28:42.716270    6547 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5286,"bootTime":1726174836,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:28:42.716355    6547 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:28:42.721305    6547 out.go:177] * [default-k8s-diff-port-572000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:28:42.728116    6547 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:28:42.728170    6547 notify.go:220] Checking for updates...
	I0912 15:28:42.735227    6547 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:28:42.736660    6547 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:28:42.739317    6547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:28:42.742236    6547 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:28:42.745303    6547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:28:42.748534    6547 config.go:182] Loaded profile config "default-k8s-diff-port-572000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:28:42.748790    6547 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:28:42.753221    6547 out.go:177] * Using the qemu2 driver based on existing profile
	I0912 15:28:42.760249    6547 start.go:297] selected driver: qemu2
	I0912 15:28:42.760257    6547 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-572000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:28:42.760332    6547 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:28:42.762559    6547 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:28:42.762590    6547 cni.go:84] Creating CNI manager for ""
	I0912 15:28:42.762597    6547 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:28:42.762632    6547 start.go:340] cluster config:
	{Name:default-k8s-diff-port-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-572000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:28:42.766055    6547 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:28:42.773186    6547 out.go:177] * Starting "default-k8s-diff-port-572000" primary control-plane node in "default-k8s-diff-port-572000" cluster
	I0912 15:28:42.777271    6547 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 15:28:42.777287    6547 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0912 15:28:42.777296    6547 cache.go:56] Caching tarball of preloaded images
	I0912 15:28:42.777349    6547 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:28:42.777354    6547 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0912 15:28:42.777417    6547 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/default-k8s-diff-port-572000/config.json ...
	I0912 15:28:42.777987    6547 start.go:360] acquireMachinesLock for default-k8s-diff-port-572000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:28:43.329360    6547 start.go:364] duration metric: took 551.364166ms to acquireMachinesLock for "default-k8s-diff-port-572000"
	I0912 15:28:43.329579    6547 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:28:43.329606    6547 fix.go:54] fixHost starting: 
	I0912 15:28:43.330333    6547 fix.go:112] recreateIfNeeded on default-k8s-diff-port-572000: state=Stopped err=<nil>
	W0912 15:28:43.330384    6547 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 15:28:43.335882    6547 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-572000" ...
	I0912 15:28:43.348912    6547 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:28:43.349088    6547 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/default-k8s-diff-port-572000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/default-k8s-diff-port-572000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/default-k8s-diff-port-572000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:66:20:00:ab:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/default-k8s-diff-port-572000/disk.qcow2
	I0912 15:28:43.359215    6547 main.go:141] libmachine: STDOUT: 
	I0912 15:28:43.359394    6547 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:28:43.359549    6547 fix.go:56] duration metric: took 29.940958ms for fixHost
	I0912 15:28:43.359571    6547 start.go:83] releasing machines lock for "default-k8s-diff-port-572000", held for 30.183833ms
	W0912 15:28:43.359610    6547 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:28:43.359842    6547 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:28:43.359864    6547 start.go:729] Will try again in 5 seconds ...
	I0912 15:28:48.361824    6547 start.go:360] acquireMachinesLock for default-k8s-diff-port-572000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:28:48.362395    6547 start.go:364] duration metric: took 408.167µs to acquireMachinesLock for "default-k8s-diff-port-572000"
	I0912 15:28:48.362561    6547 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:28:48.362585    6547 fix.go:54] fixHost starting: 
	I0912 15:28:48.363387    6547 fix.go:112] recreateIfNeeded on default-k8s-diff-port-572000: state=Stopped err=<nil>
	W0912 15:28:48.363417    6547 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 15:28:48.373089    6547 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-572000" ...
	I0912 15:28:48.375943    6547 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:28:48.376185    6547 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/default-k8s-diff-port-572000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/default-k8s-diff-port-572000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/default-k8s-diff-port-572000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:66:20:00:ab:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/default-k8s-diff-port-572000/disk.qcow2
	I0912 15:28:48.385988    6547 main.go:141] libmachine: STDOUT: 
	I0912 15:28:48.386060    6547 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:28:48.386154    6547 fix.go:56] duration metric: took 23.56975ms for fixHost
	I0912 15:28:48.386176    6547 start.go:83] releasing machines lock for "default-k8s-diff-port-572000", held for 23.719125ms
	W0912 15:28:48.386363    6547 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-572000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-572000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:28:48.401967    6547 out.go:201] 
	W0912 15:28:48.405941    6547 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:28:48.405983    6547 out.go:270] * 
	* 
	W0912 15:28:48.408117    6547 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:28:48.416913    6547 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-572000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-572000 -n default-k8s-diff-port-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-572000 -n default-k8s-diff-port-572000: exit status 7 (48.067791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-572000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.80s)
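Every failure in this serial group traces back to the same line in the driver log above: qemu is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and nothing is listening on /var/run/socket_vmnet, so the VM never boots and every later step sees state=Stopped. The connectivity check can be reproduced outside minikube; the following is a minimal sketch in Go (a hypothetical standalone probe, not part of the test suite, assuming only the SocketVMnetPath from the cluster config above):

	// socketcheck.go: probe the socket_vmnet control socket that the qemu2
	// driver connects to. Reproduces only the "Connection refused" step.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath in the config dump above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// This is the failure mode in the log: the daemon is not accepting connections.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections at", sock)
	}

If the probe fails the same way, the problem is on the host side (the socket_vmnet service itself), not in the profile being restarted, which is why the suggested "minikube delete" would not help here.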

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-572000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-572000 -n default-k8s-diff-port-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-572000 -n default-k8s-diff-port-572000: exit status 7 (33.12375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-572000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-572000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-572000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-572000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.530041ms)

** stderr ** 
	error: context "default-k8s-diff-port-572000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-572000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-572000 -n default-k8s-diff-port-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-572000 -n default-k8s-diff-port-572000: exit status 7 (33.346292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-572000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-572000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
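The (-want +got) listing above follows the go-cmp diff convention: lines prefixed "-" appear only in the expected set, and the got side is empty here because "image list" had no running host to query. A minimal sketch of how a diff in this shape is produced (assuming the github.com/google/go-cmp/cmp package, with a shortened image list for illustration):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.31.1",
			"registry.k8s.io/pause:3.10",
		}
		got := []string{} // stopped host: image list returns nothing
		if diff := cmp.Diff(want, got); diff != "" {
			// "-" lines are want-only, "+" lines got-only, as in the report above.
			fmt.Printf("images missing (-want +got):\n%s", diff)
		}
	}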
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-572000 -n default-k8s-diff-port-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-572000 -n default-k8s-diff-port-572000: exit status 7 (29.318583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-572000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-572000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-572000 --alsologtostderr -v=1: exit status 83 (39.938208ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-572000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-572000"

-- /stdout --
** stderr ** 
	I0912 15:28:48.681480    6567 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:28:48.681636    6567 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:28:48.681642    6567 out.go:358] Setting ErrFile to fd 2...
	I0912 15:28:48.681644    6567 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:28:48.681764    6567 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:28:48.681988    6567 out.go:352] Setting JSON to false
	I0912 15:28:48.681998    6567 mustload.go:65] Loading cluster: default-k8s-diff-port-572000
	I0912 15:28:48.682182    6567 config.go:182] Loaded profile config "default-k8s-diff-port-572000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:28:48.684941    6567 out.go:177] * The control-plane node default-k8s-diff-port-572000 host is not running: state=Stopped
	I0912 15:28:48.688935    6567 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-572000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-572000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-572000 -n default-k8s-diff-port-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-572000 -n default-k8s-diff-port-572000: exit status 7 (29.720917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-572000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-572000 -n default-k8s-diff-port-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-572000 -n default-k8s-diff-port-572000: exit status 7 (28.24625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-572000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
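Note that pause exits with status 83 rather than 80: the CLI detects the stopped host up front and prints guidance instead of attempting provisioning. The "(dbg) Run:" lines in helpers_test.go boil down to running the binary, capturing combined output, and recovering that exit code; a reduced sketch of the pattern (hypothetical, stdlib only, not the actual harness code):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "pause", "-p", "default-k8s-diff-port-572000")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		switch {
		case errors.As(err, &exitErr):
			// Non-zero exit: the report records this code (83 here) verbatim.
			fmt.Printf("exit status %d\n%s", exitErr.ExitCode(), out)
		case err != nil:
			fmt.Println("could not run binary:", err) // e.g. wrong working directory
		default:
			fmt.Printf("success:\n%s", out)
		}
	}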

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-807000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-807000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.183994041s)

-- stdout --
	* [newest-cni-807000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-807000" primary control-plane node in "newest-cni-807000" cluster
	* Restarting existing qemu2 VM for "newest-cni-807000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-807000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:28:53.078226    6610 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:28:53.078373    6610 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:28:53.078377    6610 out.go:358] Setting ErrFile to fd 2...
	I0912 15:28:53.078379    6610 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:28:53.078510    6610 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:28:53.079530    6610 out.go:352] Setting JSON to false
	I0912 15:28:53.095585    6610 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5297,"bootTime":1726174836,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:28:53.095668    6610 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 15:28:53.100479    6610 out.go:177] * [newest-cni-807000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 15:28:53.107459    6610 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 15:28:53.107516    6610 notify.go:220] Checking for updates...
	I0912 15:28:53.114465    6610 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 15:28:53.117427    6610 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:28:53.120439    6610 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:28:53.123540    6610 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 15:28:53.126479    6610 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:28:53.129654    6610 config.go:182] Loaded profile config "newest-cni-807000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:28:53.129917    6610 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 15:28:53.134497    6610 out.go:177] * Using the qemu2 driver based on existing profile
	I0912 15:28:53.141388    6610 start.go:297] selected driver: qemu2
	I0912 15:28:53.141394    6610 start.go:901] validating driver "qemu2" against &{Name:newest-cni-807000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-807000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:28:53.141457    6610 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:28:53.143961    6610 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0912 15:28:53.144008    6610 cni.go:84] Creating CNI manager for ""
	I0912 15:28:53.144015    6610 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:28:53.144033    6610 start.go:340] cluster config:
	{Name:newest-cni-807000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-807000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 15:28:53.147687    6610 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:28:53.155434    6610 out.go:177] * Starting "newest-cni-807000" primary control-plane node in "newest-cni-807000" cluster
	I0912 15:28:53.159520    6610 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 15:28:53.159536    6610 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0912 15:28:53.159551    6610 cache.go:56] Caching tarball of preloaded images
	I0912 15:28:53.159623    6610 preload.go:172] Found /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:28:53.159628    6610 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0912 15:28:53.159689    6610 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/newest-cni-807000/config.json ...
	I0912 15:28:53.160208    6610 start.go:360] acquireMachinesLock for newest-cni-807000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:28:53.160244    6610 start.go:364] duration metric: took 30.208µs to acquireMachinesLock for "newest-cni-807000"
	I0912 15:28:53.160255    6610 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:28:53.160260    6610 fix.go:54] fixHost starting: 
	I0912 15:28:53.160384    6610 fix.go:112] recreateIfNeeded on newest-cni-807000: state=Stopped err=<nil>
	W0912 15:28:53.160392    6610 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 15:28:53.164462    6610 out.go:177] * Restarting existing qemu2 VM for "newest-cni-807000" ...
	I0912 15:28:53.171441    6610 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:28:53.171482    6610 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/newest-cni-807000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/newest-cni-807000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/newest-cni-807000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:ad:d9:86:80:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/newest-cni-807000/disk.qcow2
	I0912 15:28:53.173456    6610 main.go:141] libmachine: STDOUT: 
	I0912 15:28:53.173478    6610 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:28:53.173506    6610 fix.go:56] duration metric: took 13.245666ms for fixHost
	I0912 15:28:53.173511    6610 start.go:83] releasing machines lock for "newest-cni-807000", held for 13.262875ms
	W0912 15:28:53.173516    6610 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:28:53.173544    6610 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:28:53.173549    6610 start.go:729] Will try again in 5 seconds ...
	I0912 15:28:58.175713    6610 start.go:360] acquireMachinesLock for newest-cni-807000: {Name:mk18104440c14407e7206cb5d92d4872e7d20daa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:28:58.176094    6610 start.go:364] duration metric: took 297.709µs to acquireMachinesLock for "newest-cni-807000"
	I0912 15:28:58.176234    6610 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:28:58.176254    6610 fix.go:54] fixHost starting: 
	I0912 15:28:58.176923    6610 fix.go:112] recreateIfNeeded on newest-cni-807000: state=Stopped err=<nil>
	W0912 15:28:58.176947    6610 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 15:28:58.185243    6610 out.go:177] * Restarting existing qemu2 VM for "newest-cni-807000" ...
	I0912 15:28:58.189247    6610 qemu.go:418] Using hvf for hardware acceleration
	I0912 15:28:58.189458    6610 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/newest-cni-807000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/newest-cni-807000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/newest-cni-807000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:ad:d9:86:80:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19616-1259/.minikube/machines/newest-cni-807000/disk.qcow2
	I0912 15:28:58.198336    6610 main.go:141] libmachine: STDOUT: 
	I0912 15:28:58.198398    6610 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:28:58.198465    6610 fix.go:56] duration metric: took 22.210583ms for fixHost
	I0912 15:28:58.198485    6610 start.go:83] releasing machines lock for "newest-cni-807000", held for 22.373334ms
	W0912 15:28:58.198649    6610 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-807000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-807000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:28:58.206216    6610 out.go:201] 
	W0912 15:28:58.210325    6610 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:28:58.210354    6610 out.go:270] * 
	* 
	W0912 15:28:58.213269    6610 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:28:58.220238    6610 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-807000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-807000 -n newest-cni-807000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-807000 -n newest-cni-807000: exit status 7 (68.24175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-807000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-807000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-807000 -n newest-cni-807000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-807000 -n newest-cni-807000: exit status 7 (29.455166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-807000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-807000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-807000 --alsologtostderr -v=1: exit status 83 (41.06475ms)

-- stdout --
	* The control-plane node newest-cni-807000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-807000"

-- /stdout --
** stderr ** 
	I0912 15:28:58.401730    6624 out.go:345] Setting OutFile to fd 1 ...
	I0912 15:28:58.401877    6624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:28:58.401880    6624 out.go:358] Setting ErrFile to fd 2...
	I0912 15:28:58.401882    6624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 15:28:58.402234    6624 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 15:28:58.402514    6624 out.go:352] Setting JSON to false
	I0912 15:28:58.402529    6624 mustload.go:65] Loading cluster: newest-cni-807000
	I0912 15:28:58.402999    6624 config.go:182] Loaded profile config "newest-cni-807000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 15:28:58.407321    6624 out.go:177] * The control-plane node newest-cni-807000 host is not running: state=Stopped
	I0912 15:28:58.411321    6624 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-807000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-807000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-807000 -n newest-cni-807000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-807000 -n newest-cni-807000: exit status 7 (29.829542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-807000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-807000 -n newest-cni-807000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-807000 -n newest-cni-807000: exit status 7 (29.224583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-807000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (155/274)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.1/json-events 11.97
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.1
21 TestBinaryMirror 0.36
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 205.06
29 TestAddons/serial/Volcano 40.29
31 TestAddons/serial/GCPAuth/Namespaces 0.09
34 TestAddons/parallel/Ingress 18.19
35 TestAddons/parallel/InspektorGadget 10.29
36 TestAddons/parallel/MetricsServer 5.27
39 TestAddons/parallel/CSI 44.38
40 TestAddons/parallel/Headlamp 17.69
41 TestAddons/parallel/CloudSpanner 5.22
42 TestAddons/parallel/LocalPath 42.08
43 TestAddons/parallel/NvidiaDevicePlugin 6.2
44 TestAddons/parallel/Yakd 10.29
45 TestAddons/StoppedEnableDisable 12.42
53 TestHyperKitDriverInstallOrUpdate 10.37
56 TestErrorSpam/setup 34.49
57 TestErrorSpam/start 0.35
58 TestErrorSpam/status 0.24
59 TestErrorSpam/pause 0.68
60 TestErrorSpam/unpause 0.62
61 TestErrorSpam/stop 55.27
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 48.83
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 38.61
68 TestFunctional/serial/KubeContext 0.03
69 TestFunctional/serial/KubectlGetPods 0.04
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.72
73 TestFunctional/serial/CacheCmd/cache/add_local 1.63
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.03
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
77 TestFunctional/serial/CacheCmd/cache/cache_reload 0.63
78 TestFunctional/serial/CacheCmd/cache/delete 0.07
79 TestFunctional/serial/MinikubeKubectlCmd 0.83
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.01
81 TestFunctional/serial/ExtraConfig 37.53
82 TestFunctional/serial/ComponentHealth 0.05
83 TestFunctional/serial/LogsCmd 0.62
84 TestFunctional/serial/LogsFileCmd 0.61
85 TestFunctional/serial/InvalidService 3.96
87 TestFunctional/parallel/ConfigCmd 0.23
88 TestFunctional/parallel/DashboardCmd 7.09
89 TestFunctional/parallel/DryRun 0.22
90 TestFunctional/parallel/InternationalLanguage 0.11
91 TestFunctional/parallel/StatusCmd 0.24
96 TestFunctional/parallel/AddonsCmd 0.1
97 TestFunctional/parallel/PersistentVolumeClaim 29.1
99 TestFunctional/parallel/SSHCmd 0.12
100 TestFunctional/parallel/CpCmd 0.38
102 TestFunctional/parallel/FileSync 0.06
103 TestFunctional/parallel/CertSync 0.36
107 TestFunctional/parallel/NodeLabels 0.05
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.07
111 TestFunctional/parallel/License 0.23
112 TestFunctional/parallel/Version/short 0.06
113 TestFunctional/parallel/Version/components 0.2
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
118 TestFunctional/parallel/ImageCommands/ImageBuild 1.88
119 TestFunctional/parallel/ImageCommands/Setup 1.79
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.44
121 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.34
122 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.16
123 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.13
124 TestFunctional/parallel/ImageCommands/ImageRemove 0.14
125 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
126 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.16
127 TestFunctional/parallel/DockerEnv/bash 0.25
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.21
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.1
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
140 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
142 TestFunctional/parallel/MountCmd/any-port 5.24
143 TestFunctional/parallel/MountCmd/specific-port 1.02
144 TestFunctional/parallel/MountCmd/VerifyCleanup 0.85
145 TestFunctional/parallel/ServiceCmd/DeployApp 6.09
146 TestFunctional/parallel/ServiceCmd/List 0.3
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.28
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.12
149 TestFunctional/parallel/ServiceCmd/Format 0.1
150 TestFunctional/parallel/ServiceCmd/URL 0.1
151 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
152 TestFunctional/parallel/ProfileCmd/profile_list 0.11
153 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
154 TestFunctional/delete_echo-server_images 0.03
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 193.02
161 TestMultiControlPlane/serial/DeployApp 4.17
162 TestMultiControlPlane/serial/PingHostFromPods 0.72
163 TestMultiControlPlane/serial/AddWorkerNode 53.18
164 TestMultiControlPlane/serial/NodeLabels 0.15
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.23
166 TestMultiControlPlane/serial/CopyFile 4.12
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 79.42
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 3.11
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.2
212 TestMainNoArgs 0.03
259 TestStoppedBinaryUpgrade/Setup 2
271 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
276 TestNoKubernetes/serial/ProfileList 31.31
277 TestNoKubernetes/serial/Stop 3.26
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
289 TestStoppedBinaryUpgrade/MinikubeLogs 0.62
294 TestStartStop/group/old-k8s-version/serial/Stop 2.04
295 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
307 TestStartStop/group/no-preload/serial/Stop 2.08
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
316 TestStartStop/group/embed-certs/serial/Stop 3.9
319 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 1.89
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
338 TestStartStop/group/newest-cni/serial/Stop 2
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.14
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-639000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-639000: exit status 85 (93.750792ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-639000 | jenkins | v1.34.0 | 12 Sep 24 14:27 PDT |          |
	|         | -p download-only-639000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 14:27:59
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 14:27:59.692886    1786 out.go:345] Setting OutFile to fd 1 ...
	I0912 14:27:59.693031    1786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 14:27:59.693034    1786 out.go:358] Setting ErrFile to fd 2...
	I0912 14:27:59.693037    1786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 14:27:59.693201    1786 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	W0912 14:27:59.693280    1786 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19616-1259/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19616-1259/.minikube/config/config.json: no such file or directory
	I0912 14:27:59.694596    1786 out.go:352] Setting JSON to true
	I0912 14:27:59.712051    1786 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1643,"bootTime":1726174836,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 14:27:59.712119    1786 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 14:27:59.718593    1786 out.go:97] [download-only-639000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 14:27:59.718721    1786 notify.go:220] Checking for updates...
	W0912 14:27:59.718752    1786 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball: no such file or directory
	I0912 14:27:59.720008    1786 out.go:169] MINIKUBE_LOCATION=19616
	I0912 14:27:59.722483    1786 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 14:27:59.727525    1786 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 14:27:59.729166    1786 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 14:27:59.732519    1786 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	W0912 14:27:59.738534    1786 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0912 14:27:59.738790    1786 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 14:27:59.743467    1786 out.go:97] Using the qemu2 driver based on user configuration
	I0912 14:27:59.743485    1786 start.go:297] selected driver: qemu2
	I0912 14:27:59.743500    1786 start.go:901] validating driver "qemu2" against <nil>
	I0912 14:27:59.743585    1786 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 14:27:59.746471    1786 out.go:169] Automatically selected the socket_vmnet network
	I0912 14:27:59.752084    1786 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0912 14:27:59.752174    1786 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0912 14:27:59.752236    1786 cni.go:84] Creating CNI manager for ""
	I0912 14:27:59.752252    1786 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0912 14:27:59.752298    1786 start.go:340] cluster config:
	{Name:download-only-639000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-639000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 14:27:59.757396    1786 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 14:27:59.761522    1786 out.go:97] Downloading VM boot image ...
	I0912 14:27:59.761546    1786 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso
	I0912 14:28:17.053394    1786 out.go:97] Starting "download-only-639000" primary control-plane node in "download-only-639000" cluster
	I0912 14:28:17.053423    1786 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0912 14:28:17.123500    1786 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0912 14:28:17.123534    1786 cache.go:56] Caching tarball of preloaded images
	I0912 14:28:17.123708    1786 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0912 14:28:17.128705    1786 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0912 14:28:17.128714    1786 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0912 14:28:17.215666    1786 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0912 14:28:27.388393    1786 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0912 14:28:27.388571    1786 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0912 14:28:28.084022    1786 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0912 14:28:28.084209    1786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/download-only-639000/config.json ...
	I0912 14:28:28.084227    1786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/download-only-639000/config.json: {Name:mk21f07567c0099c45babb8851d4182d9e947dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:28:28.084460    1786 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0912 14:28:28.084655    1786 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0912 14:28:28.636305    1786 out.go:193] 
	W0912 14:28:28.644235    1786 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19616-1259/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106d07e80 0x106d07e80 0x106d07e80 0x106d07e80 0x106d07e80 0x106d07e80 0x106d07e80] Decompressors:map[bz2:0x140007d3440 gz:0x140007d3448 tar:0x140007d33f0 tar.bz2:0x140007d3400 tar.gz:0x140007d3410 tar.xz:0x140007d3420 tar.zst:0x140007d3430 tbz2:0x140007d3400 tgz:0x140007d3410 txz:0x140007d3420 tzst:0x140007d3430 xz:0x140007d3450 zip:0x140007d3460 zst:0x140007d3458] Getters:map[file:0x140004a2fb0 http:0x140000b4fa0 https:0x140000b4ff0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0912 14:28:28.644260    1786 out_reason.go:110] 
	W0912 14:28:28.654212    1786 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 14:28:28.658126    1786 out.go:193] 
	
	
	* The control-plane node download-only-639000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-639000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
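A note on the two expected failures captured above: "minikube logs" exits 85 here because a download-only profile never creates a host, so the subtest treats the non-zero exit as correct behavior. The underlying v1.20.0 failure is the 404 on the kubectl checksum; dl.k8s.io does not appear to host darwin/arm64 client binaries for v1.20.x, since Apple Silicon builds first shipped with Kubernetes v1.21. A minimal way to confirm the 404 independently of minikube, assuming curl is available (the URL is copied from the log above):

    # Fetch only the response headers for the checksum file; the status line should report 404.
    curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1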

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-639000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.1/json-events (11.97s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-057000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-057000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (11.972709875s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (11.97s)

TestDownloadOnly/v1.31.1/preload-exists (0.00s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0.00s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-057000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-057000: exit status 85 (79.236834ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-639000 | jenkins | v1.34.0 | 12 Sep 24 14:27 PDT |                     |
	|         | -p download-only-639000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 12 Sep 24 14:28 PDT | 12 Sep 24 14:28 PDT |
	| delete  | -p download-only-639000        | download-only-639000 | jenkins | v1.34.0 | 12 Sep 24 14:28 PDT | 12 Sep 24 14:28 PDT |
	| start   | -o=json --download-only        | download-only-057000 | jenkins | v1.34.0 | 12 Sep 24 14:28 PDT |                     |
	|         | -p download-only-057000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 14:28:29
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 14:28:29.070095    1815 out.go:345] Setting OutFile to fd 1 ...
	I0912 14:28:29.070218    1815 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 14:28:29.070221    1815 out.go:358] Setting ErrFile to fd 2...
	I0912 14:28:29.070224    1815 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 14:28:29.070347    1815 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 14:28:29.071420    1815 out.go:352] Setting JSON to true
	I0912 14:28:29.087344    1815 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1673,"bootTime":1726174836,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 14:28:29.087415    1815 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 14:28:29.092199    1815 out.go:97] [download-only-057000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 14:28:29.092310    1815 notify.go:220] Checking for updates...
	I0912 14:28:29.096211    1815 out.go:169] MINIKUBE_LOCATION=19616
	I0912 14:28:29.099113    1815 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 14:28:29.103178    1815 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 14:28:29.106190    1815 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 14:28:29.109183    1815 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	W0912 14:28:29.115144    1815 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0912 14:28:29.115316    1815 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 14:28:29.118117    1815 out.go:97] Using the qemu2 driver based on user configuration
	I0912 14:28:29.118125    1815 start.go:297] selected driver: qemu2
	I0912 14:28:29.118129    1815 start.go:901] validating driver "qemu2" against <nil>
	I0912 14:28:29.118167    1815 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 14:28:29.121117    1815 out.go:169] Automatically selected the socket_vmnet network
	I0912 14:28:29.126233    1815 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0912 14:28:29.126334    1815 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0912 14:28:29.126352    1815 cni.go:84] Creating CNI manager for ""
	I0912 14:28:29.126361    1815 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 14:28:29.126373    1815 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 14:28:29.126419    1815 start.go:340] cluster config:
	{Name:download-only-057000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-057000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 14:28:29.129818    1815 iso.go:125] acquiring lock: {Name:mk3fa9c1c2a2731193a2e63235bbe8976497103e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 14:28:29.133144    1815 out.go:97] Starting "download-only-057000" primary control-plane node in "download-only-057000" cluster
	I0912 14:28:29.133155    1815 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 14:28:29.190970    1815 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0912 14:28:29.190986    1815 cache.go:56] Caching tarball of preloaded images
	I0912 14:28:29.191162    1815 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 14:28:29.196416    1815 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0912 14:28:29.196425    1815 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0912 14:28:29.276272    1815 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0912 14:28:39.058587    1815 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0912 14:28:39.058744    1815 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19616-1259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-057000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-057000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.10s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-057000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.36s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-484000 --alsologtostderr --binary-mirror http://127.0.0.1:49312 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-484000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-484000
--- PASS: TestBinaryMirror (0.36s)
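TestBinaryMirror verifies that --download-only fetches Kubernetes binaries through the mirror URL (the test starts its own local HTTP server, on port 49312 in this run) instead of the default dl.k8s.io. A rough sketch of the same idea by hand, assuming a static file server laid out like the dl.k8s.io release tree; the port and directory below are illustrative, not part of the test:

    # Hypothetical local mirror serving a pre-populated release tree.
    python3 -m http.server 8080 --directory ./k8s-mirror &
    # Point minikube's binary downloads at the mirror.
    out/minikube-darwin-arm64 start --download-only -p mirror-demo --binary-mirror http://127.0.0.1:8080 --driver=qemu2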

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-094000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-094000: exit status 85 (60.995625ms)
-- stdout --
	* Profile "addons-094000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-094000"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-094000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-094000: exit status 85 (57.050916ms)
-- stdout --
	* Profile "addons-094000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-094000"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
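Both PreSetup subtests exercise the failure path: addon commands against a profile that does not exist should fail fast with exit status 85 and the hint shown above, rather than creating any state. The same check is easy to reproduce by hand; the profile name below is deliberately made up:

    out/minikube-darwin-arm64 addons enable dashboard -p no-such-profile; echo "exit: $?"
    # Expected: the 'Profile ... not found' hint, followed by: exit: 85
    out/minikube-darwin-arm64 profile list    # shows which profiles actually exist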

TestAddons/Setup (205.06s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-094000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-094000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m25.064162584s)
--- PASS: TestAddons/Setup (205.06s)

TestAddons/serial/Volcano (40.29s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 8.456333ms
addons_test.go:905: volcano-admission stabilized in 8.491541ms
addons_test.go:913: volcano-controller stabilized in 8.520958ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-nb28v" [929e8894-de78-4cb2-9af0-26219ebdf022] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.008795209s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-c6fwz" [96845e54-4d45-4db8-9aa2-07aff5182cbd] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.011019208s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-ltcrf" [0480bcdf-013d-43d7-a48f-7b4ddcbe80fe] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.005608s
addons_test.go:932: (dbg) Run:  kubectl --context addons-094000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-094000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-094000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [653fcbd9-1418-4832-957f-e29bd81856f8] Pending
helpers_test.go:344: "test-job-nginx-0" [653fcbd9-1418-4832-957f-e29bd81856f8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [653fcbd9-1418-4832-957f-e29bd81856f8] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.005150208s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-094000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-094000 addons disable volcano --alsologtostderr -v=1: (10.032434125s)
--- PASS: TestAddons/serial/Volcano (40.29s)
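Each Volcano check follows the same pattern: poll until pods matching a label selector report healthy, then run a vcjob and disable the addon. The helper's polling is roughly equivalent to a label-selector wait; a sketch of the first check with plain kubectl (selector and namespace taken from the log above):

    # Wait for the volcano-scheduler pods to become Ready.
    kubectl --context addons-094000 -n volcano-system wait --for=condition=Ready pod -l app=volcano-scheduler --timeout=6m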

TestAddons/serial/GCPAuth/Namespaces (0.09s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-094000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-094000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.09s)

TestAddons/parallel/Ingress (18.19s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-094000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-094000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-094000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [cb1c098e-e4e6-4e45-a9a5-f281e2cb59c1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [cb1c098e-e4e6-4e45-a9a5-f281e2cb59c1] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.011513667s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-094000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-094000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-094000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-094000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-094000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-094000 addons disable ingress --alsologtostderr -v=1: (7.293652791s)
--- PASS: TestAddons/parallel/Ingress (18.19s)
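The Ingress test validates routing twice: by curling the in-VM controller with an explicit Host header (nginx.example.com), and by resolving an ingress-dns name directly against the node. The second check can be replayed by hand; using the ip subcommand avoids hard-coding the 192.168.105.2 address seen above:

    IP=$(out/minikube-darwin-arm64 -p addons-094000 ip)
    nslookup hello-john.test "$IP"    # should resolve via the ingress-dns pod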

TestAddons/parallel/InspektorGadget (10.29s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-9wp6w" [26a80f14-dce3-4096-a856-d124f0eb4620] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.006543458s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-094000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-094000: (5.281265792s)
--- PASS: TestAddons/parallel/InspektorGadget (10.29s)

TestAddons/parallel/MetricsServer (5.27s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.245625ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-kwgtm" [15370a23-e77e-4961-8c9d-79e7c4de4ce9] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005835458s
addons_test.go:417: (dbg) Run:  kubectl --context addons-094000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-094000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.27s)

TestAddons/parallel/CSI (44.38s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 2.755208ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-094000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-094000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-094000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-094000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-094000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-094000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ee21d37e-4195-4a5a-9677-68979c7c7246] Pending
helpers_test.go:344: "task-pv-pod" [ee21d37e-4195-4a5a-9677-68979c7c7246] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ee21d37e-4195-4a5a-9677-68979c7c7246] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.00391625s
addons_test.go:590: (dbg) Run:  kubectl --context addons-094000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-094000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-094000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-094000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-094000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-094000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-094000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-094000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-094000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-094000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-094000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-094000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-094000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-094000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-094000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-094000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-094000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-094000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-094000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-094000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-094000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-094000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-094000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-094000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [cb853509-5056-49ce-80d5-49b4a8483736] Pending
helpers_test.go:344: "task-pv-pod-restore" [cb853509-5056-49ce-80d5-49b4a8483736] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [cb853509-5056-49ce-80d5-49b4a8483736] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.008156084s
addons_test.go:632: (dbg) Run:  kubectl --context addons-094000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-094000 delete pod task-pv-pod-restore: (1.114946459s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-094000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-094000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-094000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-094000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.145653417s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-094000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (44.38s)
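The long run of identical "get pvc hpvc-restore" lines is the helper polling until the claim leaves Pending; the restore takes longer than the original claim because the volume snapshot has to be materialized first. With kubectl v1.23 or newer the same poll collapses into a single wait (a sketch, not what the harness actually runs):

    kubectl --context addons-094000 wait --for=jsonpath='{.status.phase}'=Bound pvc/hpvc-restore --timeout=6m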

TestAddons/parallel/Headlamp (17.69s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-094000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-r5t2h" [6a6734ac-9ea2-4baf-ae88-9e4c754f4df0] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-r5t2h" [6a6734ac-9ea2-4baf-ae88-9e4c754f4df0] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.0106375s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-094000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-094000 addons disable headlamp --alsologtostderr -v=1: (5.302300417s)
--- PASS: TestAddons/parallel/Headlamp (17.69s)

TestAddons/parallel/CloudSpanner (5.22s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-jmz4x" [68117182-0af8-4761-8c64-96bea616d667] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006586625s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-094000
--- PASS: TestAddons/parallel/CloudSpanner (5.22s)

TestAddons/parallel/LocalPath (42.08s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-094000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-094000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-094000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-094000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-094000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-094000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-094000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-094000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [398d8dbb-345b-4421-b43b-31488b83f81e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [398d8dbb-345b-4421-b43b-31488b83f81e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [398d8dbb-345b-4421-b43b-31488b83f81e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.002194084s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-094000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-094000 ssh "cat /opt/local-path-provisioner/pvc-ccf8019f-9494-470c-adb1-2ca3643c7b43_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-094000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-094000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-094000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-094000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.57940625s)
--- PASS: TestAddons/parallel/LocalPath (42.08s)
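LocalPath round-trips data through the provisioner: the test pod writes file1 into the PVC and the harness reads it back from the node's backing directory over ssh. That backing path embeds the PVC UID, so it changes on every run; listing the provisioner root is the generic way to locate it:

    out/minikube-darwin-arm64 -p addons-094000 ssh "ls /opt/local-path-provisioner/"
    # Entries are named pvc-<uid>_<namespace>_<claim>, matching the cat command above.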

TestAddons/parallel/NvidiaDevicePlugin (6.20s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-ccfkk" [0f414de0-66c3-4a20-adf0-75eebe79b9d9] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.010072167s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-094000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.20s)

TestAddons/parallel/Yakd (10.29s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-t5prc" [974d6072-e70d-40aa-a95e-2d83372daf70] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.006948875s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-094000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-094000 addons disable yakd --alsologtostderr -v=1: (5.28027575s)
--- PASS: TestAddons/parallel/Yakd (10.29s)

TestAddons/StoppedEnableDisable (12.42s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-094000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-094000: (12.229935333s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-094000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-094000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-094000
--- PASS: TestAddons/StoppedEnableDisable (12.42s)

TestHyperKitDriverInstallOrUpdate (10.37s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.37s)

TestErrorSpam/setup (34.49s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-840000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-840000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-840000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-840000 --driver=qemu2 : (34.485752625s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1."
--- PASS: TestErrorSpam/setup (34.49s)
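The "acceptable stderr" entry is minikube's version-skew warning: the host kubectl (1.29.2) is two minor releases behind the cluster's Kubernetes (1.31.1), outside kubectl's supported +/-1 minor skew, so the warning is expected and the test whitelists it. Verifying the client side by hand:

    /usr/local/bin/kubectl version --client    # reports v1.29.2 on this agent, per the log above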

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-840000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-840000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-840000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-840000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-840000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-840000 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.24s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-840000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-840000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-840000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-840000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-840000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-840000 status
--- PASS: TestErrorSpam/status (0.24s)

TestErrorSpam/pause (0.68s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-840000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-840000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-840000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-840000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-840000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-840000 pause
--- PASS: TestErrorSpam/pause (0.68s)

TestErrorSpam/unpause (0.62s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-840000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-840000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-840000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-840000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-840000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-840000 unpause
--- PASS: TestErrorSpam/unpause (0.62s)

TestErrorSpam/stop (55.27s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-840000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-840000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-840000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-840000 stop: (3.190114542s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-840000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-840000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-840000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-840000 stop: (26.039787667s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-840000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-840000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-840000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-840000 stop: (26.033281709s)
--- PASS: TestErrorSpam/stop (55.27s)
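
The three stop runs above are the spam check in action: the same command is executed repeatedly and its combined output is scanned for unexpected warning/error lines. A stripped-down Go sketch of that loop, assuming minikube on PATH and the profile name from this log (the real error_spam_test.go compares output against curated allow-lists rather than simple substring checks):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for i := 0; i < 3; i++ {
		// Run the command exactly as logged and capture everything it prints.
		out, err := exec.Command("minikube", "-p", "nospam-840000", "stop").CombinedOutput()
		if err != nil {
			fmt.Println("stop failed:", err)
		}
		// Flag anything that looks like spam; a crude stand-in for the allow-lists.
		for _, line := range strings.Split(string(out), "\n") {
			l := strings.ToLower(line)
			if strings.Contains(l, "error") || strings.Contains(l, "warning") {
				fmt.Println("unexpected output:", line)
			}
		}
	}
}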

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19616-1259/.minikube/files/etc/test/nested/copy/1784/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-384000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-384000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (48.828834083s)
--- PASS: TestFunctional/serial/StartWithProxy (48.83s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-384000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-384000 --alsologtostderr -v=8: (38.608648625s)
functional_test.go:663: soft start took 38.609047375s for "functional-384000" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.61s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-384000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-384000 cache add registry.k8s.io/pause:3.1: (1.023503375s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.72s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-384000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local333344886/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 cache add minikube-local-cache-test:functional-384000
functional_test.go:1089: (dbg) Done: out/minikube-darwin-arm64 -p functional-384000 cache add minikube-local-cache-test:functional-384000: (1.311888083s)
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 cache delete minikube-local-cache-test:functional-384000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-384000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.63s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-384000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (65.655875ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.63s)
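
The four commands above form a delete/verify/reload/verify cycle: the "Non-zero exit" in the middle is the expected state, not a failure. A minimal Go sketch of the same cycle, assuming minikube on PATH and the profile from this log:

package main

import (
	"fmt"
	"os/exec"
)

// mk runs a minikube subcommand against the profile from this log and
// returns its error so callers can assert on the exit status.
func mk(args ...string) error {
	out, err := exec.Command("minikube",
		append([]string{"-p", "functional-384000"}, args...)...).CombinedOutput()
	fmt.Printf("minikube %v: %s", args, out)
	return err
}

func main() {
	img := "registry.k8s.io/pause:latest"
	mk("ssh", "sudo docker rmi "+img) // remove the image inside the VM
	if mk("ssh", "sudo crictl inspecti "+img) == nil {
		fmt.Println("expected inspecti to fail while the image is gone")
	}
	mk("cache", "reload") // push the host-side cache back into the VM
	if err := mk("ssh", "sudo crictl inspecti "+img); err != nil {
		fmt.Println("image still missing after reload:", err)
	}
}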

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 kubectl -- --context functional-384000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.83s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-384000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-384000 get pods: (1.013275292s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.01s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-384000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-384000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.531291917s)
functional_test.go:761: restart took 37.53142475s for "functional-384000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.53s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-384000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.62s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd1245762423/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.61s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-384000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-384000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-384000: exit status 115 (140.028959ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31502 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-384000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.96s)
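
Exit status 115 above is minikube's SVC_UNREACHABLE code: the Service exists and gets a NodePort URL, but no running pod backs it. A sketch of just that exit-code assertion, with the context/profile and manifest path taken from this log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	profile := "functional-384000"
	// Create a Service whose selector matches no running pod.
	exec.Command("kubectl", "--context", profile, "apply", "-f", "testdata/invalidsvc.yaml").Run()
	defer exec.Command("kubectl", "--context", profile, "delete", "-f", "testdata/invalidsvc.yaml").Run()

	// `minikube service` prints the URL table but refuses to proceed.
	err := exec.Command("minikube", "service", "invalid-svc", "-p", profile).Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 115 {
		fmt.Println("got SVC_UNREACHABLE as expected")
	}
}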

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-384000 config get cpus: exit status 14 (32.228208ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-384000 config get cpus: exit status 14 (32.074333ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-384000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-384000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2958: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.09s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-384000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-384000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (112.659375ms)

-- stdout --
	* [functional-384000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0912 14:47:29.561372    2945 out.go:345] Setting OutFile to fd 1 ...
	I0912 14:47:29.561531    2945 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 14:47:29.561534    2945 out.go:358] Setting ErrFile to fd 2...
	I0912 14:47:29.561536    2945 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 14:47:29.561692    2945 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 14:47:29.562854    2945 out.go:352] Setting JSON to false
	I0912 14:47:29.579125    2945 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2813,"bootTime":1726174836,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 14:47:29.579193    2945 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 14:47:29.583908    2945 out.go:177] * [functional-384000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0912 14:47:29.590753    2945 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 14:47:29.590818    2945 notify.go:220] Checking for updates...
	I0912 14:47:29.597773    2945 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 14:47:29.600744    2945 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 14:47:29.603764    2945 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 14:47:29.606764    2945 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 14:47:29.609715    2945 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 14:47:29.613012    2945 config.go:182] Loaded profile config "functional-384000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 14:47:29.613285    2945 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 14:47:29.617814    2945 out.go:177] * Using the qemu2 driver based on existing profile
	I0912 14:47:29.624760    2945 start.go:297] selected driver: qemu2
	I0912 14:47:29.624766    2945 start.go:901] validating driver "qemu2" against &{Name:functional-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-384000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 14:47:29.624809    2945 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 14:47:29.629735    2945 out.go:201] 
	W0912 14:47:29.633704    2945 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0912 14:47:29.637789    2945 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-384000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.22s)
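
Exit status 23 above is RSRC_INSUFFICIENT_REQ_MEMORY: --dry-run still validates the requested config, and 250MiB is below the 1800MB usable minimum quoted in the stderr. A sketch that reproduces only that assertion, with the flags taken from the logged run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same flags as the logged run; --dry-run validates without booting a VM.
	cmd := exec.Command("minikube", "start", "-p", "functional-384000",
		"--dry-run", "--memory", "250MB", "--driver", "qemu2")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 23 {
		fmt.Println("memory floor enforced (RSRC_INSUFFICIENT_REQ_MEMORY)")
	}
}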

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-384000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-384000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (108.921791ms)

-- stdout --
	* [functional-384000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0912 14:47:29.212017    2935 out.go:345] Setting OutFile to fd 1 ...
	I0912 14:47:29.212143    2935 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 14:47:29.212146    2935 out.go:358] Setting ErrFile to fd 2...
	I0912 14:47:29.212148    2935 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 14:47:29.212278    2935 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
	I0912 14:47:29.213702    2935 out.go:352] Setting JSON to false
	I0912 14:47:29.231054    2935 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2813,"bootTime":1726174836,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 14:47:29.231141    2935 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0912 14:47:29.235424    2935 out.go:177] * [functional-384000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	I0912 14:47:29.243562    2935 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 14:47:29.243621    2935 notify.go:220] Checking for updates...
	I0912 14:47:29.250649    2935 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	I0912 14:47:29.253615    2935 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 14:47:29.256620    2935 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 14:47:29.259594    2935 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	I0912 14:47:29.262599    2935 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 14:47:29.264324    2935 config.go:182] Loaded profile config "functional-384000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 14:47:29.264580    2935 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 14:47:29.268532    2935 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0912 14:47:29.275411    2935 start.go:297] selected driver: qemu2
	I0912 14:47:29.275418    2935 start.go:901] validating driver "qemu2" against &{Name:functional-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-384000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 14:47:29.275489    2935 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 14:47:29.281587    2935 out.go:201] 
	W0912 14:47:29.285693    2935 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0912 14:47:29.289525    2935 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
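
The French variant above is the same dry-run under a French locale; treating LC_ALL as the switch is an assumption here, since the log only shows the localized result:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "functional-384000",
		"--dry-run", "--memory", "250MB", "--driver", "qemu2")
	// Assumed mechanism: a French locale in the environment selects the translation.
	cmd.Env = append(os.Environ(), "LC_ALL=fr")
	out, _ := cmd.CombinedOutput()
	fmt.Printf("%s", out) // expect "Utilisation du pilote qemu2 basé sur le profil existant"
}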

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [dafa81c3-ffc7-47d2-bdc1-e11c768e8e5e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003616667s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-384000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-384000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-384000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-384000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [905270be-c518-484c-b553-80a49db49560] Pending
helpers_test.go:344: "sp-pod" [905270be-c518-484c-b553-80a49db49560] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [905270be-c518-484c-b553-80a49db49560] Running
E0912 14:47:08.255211    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.003824125s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-384000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-384000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-384000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d4bac840-c332-4c8f-873b-40c1fb82ff12] Pending
helpers_test.go:344: "sp-pod" [d4bac840-c332-4c8f-873b-40c1fb82ff12] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d4bac840-c332-4c8f-873b-40c1fb82ff12] Running
E0912 14:47:17.225893    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004140875s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-384000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.10s)
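
The sequence above is a persistence round-trip: a file written through the first sp-pod is still present after the pod is deleted and recreated against the same claim. A condensed sketch, with names taken from the testdata manifests in this log (the real test also waits for the new pod to reach Running before the final check):

package main

import (
	"fmt"
	"os/exec"
)

// kc runs kubectl against the cluster context from this log.
func kc(args ...string) ([]byte, error) {
	return exec.Command("kubectl",
		append([]string{"--context", "functional-384000"}, args...)...).CombinedOutput()
}

func main() {
	kc("exec", "sp-pod", "--", "touch", "/tmp/mount/foo") // write through the PVC mount
	kc("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kc("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// ...wait for sp-pod to be Running again, then:
	out, _ := kc("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Printf("%s", out) // "foo" => the claim outlived its pod
}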

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh -n functional-384000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 cp functional-384000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd138548721/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh -n functional-384000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh -n functional-384000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.38s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1784/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh "sudo cat /etc/test/nested/copy/1784/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.06s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1784.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh "sudo cat /etc/ssl/certs/1784.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1784.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh "sudo cat /usr/share/ca-certificates/1784.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/17842.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh "sudo cat /etc/ssl/certs/17842.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/17842.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh "sudo cat /usr/share/ca-certificates/17842.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.36s)
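
The oddly named files above (/etc/ssl/certs/51391683.0, 3ec20f2e.0) appear to be the OpenSSL subject-hash aliases of the synced certificates, which is why each .pem is checked under three paths. A sketch of deriving such a hash locally, assuming openssl on PATH and a .pem from the ~/.minikube/files tree in this log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// `openssl x509 -hash` prints the subject hash used for cert symlink names.
	out, err := exec.Command("openssl", "x509", "-noout", "-hash", "-in", "1784.pem").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.TrimSpace(string(out)) + ".0") // e.g. 51391683.0
}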

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-384000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-384000 ssh "sudo systemctl is-active crio": exit status 1 (67.107125ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)
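
The non-zero exit above is the pass condition: `systemctl is-active` exits 0 only for an active unit (3 for inactive, per systemd convention), and minikube ssh surfaces that as a non-zero exit of its own, so "inactive" plus a failing exit means cri-o is correctly disabled on this docker-runtime cluster. Checked in isolation:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "functional-384000",
		"ssh", "sudo systemctl is-active crio")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// "inactive" plus a non-zero exit => the alternate runtime is disabled.
		fmt.Printf("crio state: %s(exit %d)\n", out, exitErr.ExitCode())
	}
}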

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.23s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.20s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-384000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-384000
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-384000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-384000 image ls --format short --alsologtostderr:
I0912 14:47:37.132526    2964 out.go:345] Setting OutFile to fd 1 ...
I0912 14:47:37.132674    2964 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 14:47:37.132678    2964 out.go:358] Setting ErrFile to fd 2...
I0912 14:47:37.132680    2964 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 14:47:37.132814    2964 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
I0912 14:47:37.133254    2964 config.go:182] Loaded profile config "functional-384000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 14:47:37.133317    2964 config.go:182] Loaded profile config "functional-384000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 14:47:37.134191    2964 ssh_runner.go:195] Run: systemctl --version
I0912 14:47:37.134200    2964 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/functional-384000/id_rsa Username:docker}
I0912 14:47:37.156112    2964 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-384000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/library/minikube-local-cache-test | functional-384000 | 2cd7f56cdde29 | 30B    |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| docker.io/kicbase/echo-server               | functional-384000 | ce2d2cda2d858 | 4.78MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-384000 image ls --format table --alsologtostderr:
I0912 14:47:37.339466    2970 out.go:345] Setting OutFile to fd 1 ...
I0912 14:47:37.339599    2970 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 14:47:37.339606    2970 out.go:358] Setting ErrFile to fd 2...
I0912 14:47:37.339608    2970 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 14:47:37.339738    2970 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
I0912 14:47:37.340216    2970 config.go:182] Loaded profile config "functional-384000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 14:47:37.340278    2970 config.go:182] Loaded profile config "functional-384000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 14:47:37.341118    2970 ssh_runner.go:195] Run: systemctl --version
I0912 14:47:37.341128    2970 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/functional-384000/id_rsa Username:docker}
I0912 14:47:37.362113    2970 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-384000 image ls --format json --alsologtostderr:
[{"id":"2cd7f56cdde293fa454d3df54492dbcf2350c1d6ab0c5bfbd909c05e2d2857bb","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-384000"],"size":"30"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e
5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8
s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-384000"],"size":"4780000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-384000 image ls --format json --alsologtostderr:
I0912 14:47:37.199366    2966 out.go:345] Setting OutFile to fd 1 ...
I0912 14:47:37.199508    2966 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 14:47:37.199514    2966 out.go:358] Setting ErrFile to fd 2...
I0912 14:47:37.199517    2966 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 14:47:37.199646    2966 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
I0912 14:47:37.200041    2966 config.go:182] Loaded profile config "functional-384000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 14:47:37.200102    2966 config.go:182] Loaded profile config "functional-384000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 14:47:37.200936    2966 ssh_runner.go:195] Run: systemctl --version
I0912 14:47:37.200952    2966 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/functional-384000/id_rsa Username:docker}
I0912 14:47:37.222445    2966 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-384000 image ls --format yaml --alsologtostderr:
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-384000
size: "4780000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 2cd7f56cdde293fa454d3df54492dbcf2350c1d6ab0c5bfbd909c05e2d2857bb
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-384000
size: "30"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-384000 image ls --format yaml --alsologtostderr:
I0912 14:47:37.269663    2968 out.go:345] Setting OutFile to fd 1 ...
I0912 14:47:37.269824    2968 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 14:47:37.269827    2968 out.go:358] Setting ErrFile to fd 2...
I0912 14:47:37.269830    2968 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 14:47:37.269952    2968 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
I0912 14:47:37.270382    2968 config.go:182] Loaded profile config "functional-384000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 14:47:37.270442    2968 config.go:182] Loaded profile config "functional-384000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 14:47:37.271252    2968 ssh_runner.go:195] Run: systemctl --version
I0912 14:47:37.271261    2968 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/functional-384000/id_rsa Username:docker}
I0912 14:47:37.292586    2968 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)
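
Note: the three ImageList tests above (table, json, yaml) run the same command with different --format values; only the serialization differs, as the matching image lists show. The pattern, reusing this run's profile name (substitute your own):

    $ out/minikube-darwin-arm64 -p functional-384000 image ls --format table
    $ out/minikube-darwin-arm64 -p functional-384000 image ls --format json
    $ out/minikube-darwin-arm64 -p functional-384000 image ls --format yaml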

TestFunctional/parallel/ImageCommands/ImageBuild (1.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-384000 ssh pgrep buildkitd: exit status 1 (55.97775ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 image build -t localhost/my-image:functional-384000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-384000 image build -t localhost/my-image:functional-384000 testdata/build --alsologtostderr: (1.759186542s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-384000 image build -t localhost/my-image:functional-384000 testdata/build --alsologtostderr:
I0912 14:47:37.462511    2974 out.go:345] Setting OutFile to fd 1 ...
I0912 14:47:37.462732    2974 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 14:47:37.462736    2974 out.go:358] Setting ErrFile to fd 2...
I0912 14:47:37.462738    2974 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 14:47:37.462872    2974 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19616-1259/.minikube/bin
I0912 14:47:37.463325    2974 config.go:182] Loaded profile config "functional-384000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 14:47:37.464147    2974 config.go:182] Loaded profile config "functional-384000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 14:47:37.465077    2974 ssh_runner.go:195] Run: systemctl --version
I0912 14:47:37.465085    2974 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19616-1259/.minikube/machines/functional-384000/id_rsa Username:docker}
I0912 14:47:37.488179    2974 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.856248395.tar
I0912 14:47:37.488249    2974 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0912 14:47:37.496397    2974 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.856248395.tar
I0912 14:47:37.497910    2974 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.856248395.tar: stat -c "%s %y" /var/lib/minikube/build/build.856248395.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.856248395.tar': No such file or directory
I0912 14:47:37.497922    2974 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.856248395.tar --> /var/lib/minikube/build/build.856248395.tar (3072 bytes)
I0912 14:47:37.506691    2974 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.856248395
I0912 14:47:37.511017    2974 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.856248395 -xf /var/lib/minikube/build/build.856248395.tar
I0912 14:47:37.514157    2974 docker.go:360] Building image: /var/lib/minikube/build/build.856248395
I0912 14:47:37.514208    2974 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-384000 /var/lib/minikube/build/build.856248395
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:f2f7e6ad96fb1af5953f4a501f48d2f74c5f27f8470f0f1292a62d3f2f656c02 done
#8 naming to localhost/my-image:functional-384000 done
#8 DONE 0.0s
I0912 14:47:39.096528    2974 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-384000 /var/lib/minikube/build/build.856248395: (1.582343875s)
I0912 14:47:39.096596    2974 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.856248395
I0912 14:47:39.102864    2974 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.856248395.tar
I0912 14:47:39.106874    2974 build_images.go:217] Built localhost/my-image:functional-384000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.856248395.tar
I0912 14:47:39.106894    2974 build_images.go:133] succeeded building to: functional-384000
I0912 14:47:39.106897    2974 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.88s)
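
Note: ImageBuild packages testdata/build into a tarball, copies it into the guest, and runs docker build there. From the build steps logged above, the Dockerfile is roughly the following (a reconstruction from the log, not the verbatim test fixture):

    FROM gcr.io/k8s-minikube/busybox:latest
    RUN true
    ADD content.txt /

The invocation under test:

    $ out/minikube-darwin-arm64 -p functional-384000 image build -t localhost/my-image:functional-384000 testdata/build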

TestFunctional/parallel/ImageCommands/Setup (1.79s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.769232584s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-384000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.79s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 image load --daemon kicbase/echo-server:functional-384000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.44s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 image load --daemon kicbase/echo-server:functional-384000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.34s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-384000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 image load --daemon kicbase/echo-server:functional-384000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)
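
Note: ImageLoadDaemon, ImageReloadDaemon, and ImageTagAndLoadDaemon all follow the same tag-then-load shape, pushing an image from the host Docker daemon into the guest:

    $ docker tag kicbase/echo-server:latest kicbase/echo-server:functional-384000
    $ out/minikube-darwin-arm64 -p functional-384000 image load --daemon kicbase/echo-server:functional-384000
    $ out/minikube-darwin-arm64 -p functional-384000 image ls    # confirm the tag is visible in the guest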

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 image save kicbase/echo-server:functional-384000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.13s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 image rm kicbase/echo-server:functional-384000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-384000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 image save --daemon kicbase/echo-server:functional-384000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-384000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.16s)
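
Note: ImageSaveToFile through ImageSaveDaemon form a round trip: save the image to a tarball, remove it from the guest, load it back from the file, then push it back into the host daemon. A sketch (the tarball path below is arbitrary, not the Jenkins workspace path used above):

    $ out/minikube-darwin-arm64 -p functional-384000 image save kicbase/echo-server:functional-384000 /tmp/echo-server-save.tar
    $ out/minikube-darwin-arm64 -p functional-384000 image rm kicbase/echo-server:functional-384000
    $ out/minikube-darwin-arm64 -p functional-384000 image load /tmp/echo-server-save.tar
    $ out/minikube-darwin-arm64 -p functional-384000 image save --daemon kicbase/echo-server:functional-384000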

TestFunctional/parallel/DockerEnv/bash (0.25s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-384000 docker-env) && out/minikube-darwin-arm64 status -p functional-384000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-384000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.25s)
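
Note: DockerEnv/bash checks that docker-env emits shell exports pointing the host's docker client at the guest daemon; the canonical usage is:

    $ eval $(out/minikube-darwin-arm64 -p functional-384000 docker-env)
    $ docker images    # now lists the guest's images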

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
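
Note: all three UpdateContextCmd cases run the same command, which rewrites the profile's kubeconfig entry if the cluster IP or port has changed:

    $ out/minikube-darwin-arm64 -p functional-384000 update-context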

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-384000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-384000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-384000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2807: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-384000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.21s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-384000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-384000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [c9dfa1c3-5690-4290-8512-fa876a00e526] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [c9dfa1c3-5690-4290-8512-fa876a00e526] Running
E0912 14:47:06.956835    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.crt: no such file or directory" logger="UnhandledError"
E0912 14:47:06.964893    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.crt: no such file or directory" logger="UnhandledError"
E0912 14:47:06.976414    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.crt: no such file or directory" logger="UnhandledError"
E0912 14:47:06.999743    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.crt: no such file or directory" logger="UnhandledError"
E0912 14:47:07.043073    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.crt: no such file or directory" logger="UnhandledError"
E0912 14:47:07.126429    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.crt: no such file or directory" logger="UnhandledError"
E0912 14:47:07.288782    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.crt: no such file or directory" logger="UnhandledError"
E0912 14:47:07.611457    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.005609s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-384000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.125.237 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-384000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
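
Note: the TunnelCmd serial group amounts to this workflow: run tunnel in the background, deploy a LoadBalancer service, read its ingress IP, then reach it directly and through cluster DNS. Condensed from the steps above:

    $ out/minikube-darwin-arm64 -p functional-384000 tunnel &
    $ kubectl --context functional-384000 apply -f testdata/testsvc.yaml
    $ kubectl --context functional-384000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
    $ dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A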

TestFunctional/parallel/MountCmd/any-port (5.24s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-384000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1388454592/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726177628839905000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1388454592/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726177628839905000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1388454592/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726177628839905000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1388454592/001/test-1726177628839905000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-384000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (57.808833ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh -- ls -la /mount-9p
E0912 14:47:09.538711    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 12 21:47 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 12 21:47 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 12 21:47 test-1726177628839905000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh cat /mount-9p/test-1726177628839905000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-384000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [55b4496f-1b2d-496f-aec7-3fe55ed51783] Pending
helpers_test.go:344: "busybox-mount" [55b4496f-1b2d-496f-aec7-3fe55ed51783] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [55b4496f-1b2d-496f-aec7-3fe55ed51783] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
E0912 14:47:12.102442    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [55b4496f-1b2d-496f-aec7-3fe55ed51783] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004725458s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-384000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-384000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1388454592/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.24s)
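
Note: MountCmd/any-port exercises the 9p host mount end to end; the shape of the test, with <host-dir> standing in for the temporary directory used above:

    $ out/minikube-darwin-arm64 mount -p functional-384000 <host-dir>:/mount-9p &
    $ out/minikube-darwin-arm64 -p functional-384000 ssh "findmnt -T /mount-9p | grep 9p"
    $ out/minikube-darwin-arm64 -p functional-384000 ssh -- ls -la /mount-9p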

TestFunctional/parallel/MountCmd/specific-port (1.02s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-384000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3978805932/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-384000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (56.9ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-384000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3978805932/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-384000 ssh "sudo umount -f /mount-9p": exit status 1 (59.970541ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-384000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-384000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3978805932/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.02s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.85s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-384000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1124139616/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-384000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1124139616/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-384000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1124139616/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-384000 ssh "findmnt -T" /mount1: exit status 1 (71.183208ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-384000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-384000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1124139616/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-384000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1124139616/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-384000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1124139616/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.85s)
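
Note: specific-port pins the 9p server to --port 46464, and VerifyCleanup relies on the kill switch to tear down all mount helper processes at once:

    $ out/minikube-darwin-arm64 mount -p functional-384000 --kill=true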

TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-384000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-384000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-rh5f2" [4a29721f-ddf2-4c3b-b13b-cd916dc732a5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-rh5f2" [4a29721f-ddf2-4c3b-b13b-cd916dc732a5] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.006946417s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

TestFunctional/parallel/ServiceCmd/List (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.30s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 service list -o json
functional_test.go:1494: Took "283.481792ms" to run "out/minikube-darwin-arm64 -p functional-384000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.12s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:30823
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.12s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-384000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:30823
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)
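
Note: the ServiceCmd group deploys a NodePort service and then resolves its endpoint through the service subcommand; condensed:

    $ kubectl --context functional-384000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
    $ kubectl --context functional-384000 expose deployment hello-node --type=NodePort --port=8080
    $ out/minikube-darwin-arm64 -p functional-384000 service hello-node --url    # e.g. http://192.168.105.4:30823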

TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "80.764834ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "33.56475ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "82.0475ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "33.614375ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
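
Note: the ProfileCmd timings compare the full and light listing paths; --light skips validating cluster status, which is why it finishes in well under half the time in this run (~34ms vs ~82ms):

    $ out/minikube-darwin-arm64 profile list -o json
    $ out/minikube-darwin-arm64 profile list -o json --light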

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-384000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-384000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-384000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (193.02s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-771000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0912 14:47:47.953177    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.crt: no such file or directory" logger="UnhandledError"
E0912 14:48:28.915605    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.crt: no such file or directory" logger="UnhandledError"
E0912 14:49:50.836663    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/addons-094000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-771000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m12.832528916s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (193.02s)
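
Note: StartCluster brings up a multi-control-plane (HA) cluster in one invocation; the command under test, followed by the status check:

    $ out/minikube-darwin-arm64 start -p ha-771000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2
    $ out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr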

TestMultiControlPlane/serial/DeployApp (4.17s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-771000 -- rollout status deployment/busybox: (2.683884833s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- exec busybox-7dff88458-5fjfj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- exec busybox-7dff88458-m89qh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- exec busybox-7dff88458-pfdzj -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- exec busybox-7dff88458-5fjfj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- exec busybox-7dff88458-m89qh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- exec busybox-7dff88458-pfdzj -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- exec busybox-7dff88458-5fjfj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- exec busybox-7dff88458-m89qh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- exec busybox-7dff88458-pfdzj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.17s)

TestMultiControlPlane/serial/PingHostFromPods (0.72s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- exec busybox-7dff88458-5fjfj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- exec busybox-7dff88458-5fjfj -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- exec busybox-7dff88458-m89qh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- exec busybox-7dff88458-m89qh -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- exec busybox-7dff88458-pfdzj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-771000 -- exec busybox-7dff88458-pfdzj -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.72s)
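
Note: PingHostFromPods resolves the host from inside each busybox pod via the host.minikube.internal alias, then pings the resolved address (192.168.105.1, the host side of the VM network in this run):

    $ out/minikube-darwin-arm64 kubectl -p ha-771000 -- exec busybox-7dff88458-5fjfj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    $ out/minikube-darwin-arm64 kubectl -p ha-771000 -- exec busybox-7dff88458-5fjfj -- sh -c "ping -c 1 192.168.105.1"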

TestMultiControlPlane/serial/AddWorkerNode (53.18s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-771000 -v=7 --alsologtostderr
E0912 14:51:52.737880    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/functional-384000/client.crt: no such file or directory" logger="UnhandledError"
E0912 14:51:52.745559    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/functional-384000/client.crt: no such file or directory" logger="UnhandledError"
E0912 14:51:52.758900    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/functional-384000/client.crt: no such file or directory" logger="UnhandledError"
E0912 14:51:52.782287    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/functional-384000/client.crt: no such file or directory" logger="UnhandledError"
E0912 14:51:52.824787    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/functional-384000/client.crt: no such file or directory" logger="UnhandledError"
E0912 14:51:52.906645    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/functional-384000/client.crt: no such file or directory" logger="UnhandledError"
E0912 14:51:53.068673    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/functional-384000/client.crt: no such file or directory" logger="UnhandledError"
E0912 14:51:53.392085    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/functional-384000/client.crt: no such file or directory" logger="UnhandledError"
E0912 14:51:54.035544    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/functional-384000/client.crt: no such file or directory" logger="UnhandledError"
E0912 14:51:55.318556    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/functional-384000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-771000 -v=7 --alsologtostderr: (52.964057917s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.18s)

TestMultiControlPlane/serial/NodeLabels (0.15s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-771000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.15s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.23s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.23s)

TestMultiControlPlane/serial/CopyFile (4.12s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 cp testdata/cp-test.txt ha-771000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 cp ha-771000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3028484419/001/cp-test_ha-771000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 cp ha-771000:/home/docker/cp-test.txt ha-771000-m02:/home/docker/cp-test_ha-771000_ha-771000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000-m02 "sudo cat /home/docker/cp-test_ha-771000_ha-771000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 cp ha-771000:/home/docker/cp-test.txt ha-771000-m03:/home/docker/cp-test_ha-771000_ha-771000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000-m03 "sudo cat /home/docker/cp-test_ha-771000_ha-771000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 cp ha-771000:/home/docker/cp-test.txt ha-771000-m04:/home/docker/cp-test_ha-771000_ha-771000-m04.txt
E0912 14:51:57.880734    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/functional-384000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000-m04 "sudo cat /home/docker/cp-test_ha-771000_ha-771000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 cp testdata/cp-test.txt ha-771000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 cp ha-771000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3028484419/001/cp-test_ha-771000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 cp ha-771000-m02:/home/docker/cp-test.txt ha-771000:/home/docker/cp-test_ha-771000-m02_ha-771000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000 "sudo cat /home/docker/cp-test_ha-771000-m02_ha-771000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 cp ha-771000-m02:/home/docker/cp-test.txt ha-771000-m03:/home/docker/cp-test_ha-771000-m02_ha-771000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000-m03 "sudo cat /home/docker/cp-test_ha-771000-m02_ha-771000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 cp ha-771000-m02:/home/docker/cp-test.txt ha-771000-m04:/home/docker/cp-test_ha-771000-m02_ha-771000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000-m04 "sudo cat /home/docker/cp-test_ha-771000-m02_ha-771000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 cp testdata/cp-test.txt ha-771000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 cp ha-771000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3028484419/001/cp-test_ha-771000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 cp ha-771000-m03:/home/docker/cp-test.txt ha-771000:/home/docker/cp-test_ha-771000-m03_ha-771000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000 "sudo cat /home/docker/cp-test_ha-771000-m03_ha-771000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 cp ha-771000-m03:/home/docker/cp-test.txt ha-771000-m02:/home/docker/cp-test_ha-771000-m03_ha-771000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000-m02 "sudo cat /home/docker/cp-test_ha-771000-m03_ha-771000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 cp ha-771000-m03:/home/docker/cp-test.txt ha-771000-m04:/home/docker/cp-test_ha-771000-m03_ha-771000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000-m04 "sudo cat /home/docker/cp-test_ha-771000-m03_ha-771000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 cp testdata/cp-test.txt ha-771000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 cp ha-771000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3028484419/001/cp-test_ha-771000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 cp ha-771000-m04:/home/docker/cp-test.txt ha-771000:/home/docker/cp-test_ha-771000-m04_ha-771000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000 "sudo cat /home/docker/cp-test_ha-771000-m04_ha-771000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 cp ha-771000-m04:/home/docker/cp-test.txt ha-771000-m02:/home/docker/cp-test_ha-771000-m04_ha-771000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000-m02 "sudo cat /home/docker/cp-test_ha-771000-m04_ha-771000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 cp ha-771000-m04:/home/docker/cp-test.txt ha-771000-m03:/home/docker/cp-test_ha-771000-m04_ha-771000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-771000 ssh -n ha-771000-m03 "sudo cat /home/docker/cp-test_ha-771000-m04_ha-771000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.12s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (79.42s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0912 15:01:52.721512    1784 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19616-1259/.minikube/profiles/functional-384000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m19.418447708s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (79.42s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.11s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-722000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-722000 --output=json --user=testUser: (3.106318208s)
--- PASS: TestJSONOutput/stop/Command (3.11s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-024000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-024000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.025417ms)
-- stdout --
	{"specversion":"1.0","id":"6f89e197-7f2a-45ff-8b50-fc35f1e29181","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-024000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fb71c8e4-2c79-4d95-adb2-a1660c3c1e54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19616"}}
	{"specversion":"1.0","id":"62d3f809-bb0f-4267-a2fa-553e50751f52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig"}}
	{"specversion":"1.0","id":"13dc76d0-9787-4a0f-845d-b33eeafafed6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"eb40e697-6474-4c40-a715-a8ef1146ebe6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6c2e0c81-f7aa-4fcd-8000-355ffe5dc5f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube"}}
	{"specversion":"1.0","id":"90e7baf7-5b9f-4fb4-8108-97c267c56674","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"83ab4e43-b75a-4dfe-8e4f-d8dfd4148d81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-024000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-024000
--- PASS: TestErrorJSONOutput (0.20s)

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (2s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.00s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-190000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-190000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (102.700834ms)
-- stdout --
	* [NoKubernetes-190000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19616-1259/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19616-1259/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-190000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-190000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (40.329458ms)
-- stdout --
	* The control-plane node NoKubernetes-190000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-190000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.31s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.674158416s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.631777s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.31s)

TestNoKubernetes/serial/Stop (3.26s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-190000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-190000: (3.260268833s)
--- PASS: TestNoKubernetes/serial/Stop (3.26s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-190000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-190000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (38.48875ms)
-- stdout --
	* The control-plane node NoKubernetes-190000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-190000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.62s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-841000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.62s)

TestStartStop/group/old-k8s-version/serial/Stop (2.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-196000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-196000 --alsologtostderr -v=3: (2.044683708s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.04s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-196000 -n old-k8s-version-196000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-196000 -n old-k8s-version-196000: exit status 7 (55.921125ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-196000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/no-preload/serial/Stop (2.08s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-558000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-558000 --alsologtostderr -v=3: (2.081294208s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.08s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000: exit status 7 (52.245208ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-558000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (3.9s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-818000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-818000 --alsologtostderr -v=3: (3.90229125s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.90s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-818000 -n embed-certs-818000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-818000 -n embed-certs-818000: exit status 7 (60.857833ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-818000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (1.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-572000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-572000 --alsologtostderr -v=3: (1.888351042s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (1.89s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-572000 -n default-k8s-diff-port-572000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-572000 -n default-k8s-diff-port-572000: exit status 7 (64.198667ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-572000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-807000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (2s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-807000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-807000 --alsologtostderr -v=3: (2.002768375s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.00s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-807000 -n newest-cni-807000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-807000 -n newest-cni-807000: exit status 7 (68.30675ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-807000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/274)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.32s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-237000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-237000
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-237000
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-237000
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-237000
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-237000
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-237000
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-237000
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-237000
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-237000
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-237000
>>> host: /etc/nsswitch.conf:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"
>>> host: /etc/hosts:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"
>>> host: /etc/resolv.conf:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-237000
>>> host: crictl pods:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"
>>> host: crictl containers:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"
>>> k8s: describe netcat deployment:
error: context "cilium-237000" does not exist
>>> k8s: describe netcat pod(s):
error: context "cilium-237000" does not exist
>>> k8s: netcat logs:
error: context "cilium-237000" does not exist
>>> k8s: describe coredns deployment:
error: context "cilium-237000" does not exist
>>> k8s: describe coredns pods:
error: context "cilium-237000" does not exist
>>> k8s: coredns logs:
error: context "cilium-237000" does not exist
>>> k8s: describe api server pod(s):
error: context "cilium-237000" does not exist
>>> k8s: api server logs:
error: context "cilium-237000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-237000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-237000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-237000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-237000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-237000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-237000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-237000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-237000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-237000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-237000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-237000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-237000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-237000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-237000"

                                                
                                                
----------------------- debugLogs end: cilium-237000 [took: 2.218283416s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-237000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-237000
--- SKIP: TestNetworkPlugins/group/cilium (2.32s)

TestStartStop/group/disable-driver-mounts (0.1s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-746000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-746000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.10s)